Posted to issues-all@impala.apache.org by "Qifan Chen (Jira)" <ji...@apache.org> on 2020/11/29 18:52:00 UTC
[jira] [Comment Edited] (IMPALA-9355)
TestExchangeMemUsage.test_exchange_mem_usage_scaling doesn't hit the memory
limit
[ https://issues.apache.org/jira/browse/IMPALA-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17240329#comment-17240329 ]
Qifan Chen edited comment on IMPALA-9355 at 11/29/20, 6:51 PM:
---------------------------------------------------------------
Tested the query in question (shown below) and observed the memory allocation failures.
{code:java}
set mem_limit=<limit_in_mb>;
set num_scanner_threads=1;
select *
from tpch_parquet.lineitem l1
join tpch_parquet.lineitem l2 on l1.l_orderkey = l2.l_orderkey and
l1.l_partkey = l2.l_partkey and l1.l_suppkey = l2.l_suppkey
and l1.l_linenumber = l2.l_linenumber
order by l1.l_orderkey desc, l1.l_partkey, l1.l_suppkey, l1.l_linenumber
limit 5;
{code}
It looks like the failure to allocate memory can occur in different query operators; sometimes it occurs in Exchange node #4. With the memory limit set to three different values (166.3MB, 165.5MB, and 162.5MB), the number of exchange allocation failures observed out of 100 total runs is as follows.
{code:java}
set mem_limit=166.3m; -- all 100 runs failed, 61 exchange allocation failures
set mem_limit=165.5m; -- all 100 runs failed, 57 exchange allocation failures
set mem_limit=162.5m; -- all 100 runs failed, 43 exchange allocation failures
{code}
Note that 162.5MB is the lowest workable memory limit; below it, the query fails with the minimal memory reservation error instead.
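The per-limit tallies above can be gathered with a small script. The sketch below (Python) classifies captured error messages by the operator named in them and counts exchange-node failures across repeated runs; the message pattern assumed here (e.g. "EXCHANGE_NODE (id=4)") is an illustrative guess at the shape of Impala's error output, not its exact format.

```python
import re

def classify_mem_limit_failure(error_msg):
    """Return (operator, id) for a memory-limit failure, or None if the
    run did not hit the limit.

    The "<OPERATOR>_NODE (id=N)" pattern is an assumption about how the
    failing operator appears in the error text, not Impala's exact format.
    """
    if "Memory limit exceeded" not in error_msg:
        return None
    match = re.search(r"(\w+_NODE) \(id=(\d+)\)", error_msg)
    if match:
        return match.group(1), int(match.group(2))
    return "UNKNOWN", -1

def count_exchange_failures(error_msgs):
    """Count runs whose memory-limit failure points at an exchange node."""
    total = 0
    for msg in error_msgs:
        result = classify_mem_limit_failure(msg)
        if result is not None and result[0] == "EXCHANGE_NODE":
            total += 1
    return total
```

Feeding the error output of each of the 100 runs into count_exchange_failures would reproduce the per-limit counts shown above.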
> TestExchangeMemUsage.test_exchange_mem_usage_scaling doesn't hit the memory limit
> ---------------------------------------------------------------------------------
>
> Key: IMPALA-9355
> URL: https://issues.apache.org/jira/browse/IMPALA-9355
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Reporter: Fang-Yu Rao
> Assignee: Qifan Chen
> Priority: Critical
> Labels: broken-build, flaky
>
> The EE test {{test_exchange_mem_usage_scaling}} failed because the query at [https://github.com/apache/impala/blame/master/testdata/workloads/functional-query/queries/QueryTest/exchange-mem-scaling.test#L7-L15] does not hit the memory limit (170m) specified at [https://github.com/apache/impala/blame/master/testdata/workloads/functional-query/queries/QueryTest/exchange-mem-scaling.test#L7]. We may need to further reduce the specified limit. The error message is given below. Recall that the same issue occurred at https://issues.apache.org/jira/browse/IMPALA-7873 but was resolved.
> {code:java}
> FAIL query_test/test_mem_usage_scaling.py::TestExchangeMemUsage::()::test_exchange_mem_usage_scaling[protocol: beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> =================================== FAILURES ===================================
> TestExchangeMemUsage.test_exchange_mem_usage_scaling[protocol: beeswax | exec_option: {'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 5000, 'disable_codegen': False, 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: parquet/none]
> [gw3] linux2 -- Python 2.7.12 /home/ubuntu/Impala/bin/../infra/python/env/bin/python
> query_test/test_mem_usage_scaling.py:386: in test_exchange_mem_usage_scaling
> self.run_test_case('QueryTest/exchange-mem-scaling', vector)
> common/impala_test_suite.py:674: in run_test_case
> expected_str, query)
> E AssertionError: Expected exception: Memory limit exceeded
> E
> E when running:
> E
> E set mem_limit=170m;
> E set num_scanner_threads=1;
> E select *
> E from tpch_parquet.lineitem l1
> E join tpch_parquet.lineitem l2 on l1.l_orderkey = l2.l_orderkey and
> E l1.l_partkey = l2.l_partkey and l1.l_suppkey = l2.l_suppkey
> E and l1.l_linenumber = l2.l_linenumber
> E order by l1.l_orderkey desc, l1.l_partkey, l1.l_suppkey, l1.l_linenumber
> E limit 5
> {code}
> [~tarmstrong@cloudera.com] and [~joemcdonnell] reviewed the patch at [https://gerrit.cloudera.org/c/11965/]. Assigning this JIRA to [~joemcdonnell] for now; please re-assign it to others as appropriate. Thanks!
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)