Posted to issues@hive.apache.org by "Hive QA (JIRA)" <ji...@apache.org> on 2017/06/22 03:54:00 UTC
[jira] [Commented] (HIVE-16793) Scalar sub-query: sq_count_check not required if gby keys are constant
[ https://issues.apache.org/jira/browse/HIVE-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16058689#comment-16058689 ]
Hive QA commented on HIVE-16793:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12873969/HIVE-16793.2.patch
{color:red}ERROR:{color} -1 due to build exiting with an error
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/5725/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/5725/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-5725/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-06-22 03:53:47.754
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-5725/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-06-22 03:53:47.756
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 12005fd HIVE-16918 : Skip ReplCopyTask distcp for _metadata copying. Also enable -pb for distcp (Sushanth Sowmyan, reviewed by Thejas Nair, Anishek Agarwal)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 12005fd HIVE-16918 : Skip ReplCopyTask distcp for _metadata copying. Also enable -pb for distcp (Sushanth Sowmyan, reviewed by Thejas Nair, Anishek Agarwal)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-06-22 03:53:53.219
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveSubQueryRemoveRule.java: No such file or directory
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java: No such file or directory
error: a/ql/src/test/results/clientnegative/subquery_scalar_multi_rows.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/llap/subquery_scalar.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/llap/subquery_select.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/perf/query23.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/perf/query44.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/perf/query54.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/perf/query58.q.out: No such file or directory
error: a/ql/src/test/results/clientpositive/perf/query6.q.out: No such file or directory
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}
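The "does not appear to apply with p0, p1, or p2" failure above refers to patch path strip levels: a git-format diff prefixes paths with {{a/}} and {{b/}}, which are not real directories, so the applier must strip one leading component ({{-p1}}). A minimal sketch with hypothetical file names (not the actual Hive tree) illustrating the mechanism smart-apply-patch.sh probes:

```shell
# Hypothetical mini-tree showing why git-format diffs need -p1:
# the a/ and b/ prefixes are not real directories and must be stripped.
mkdir -p demo/ql && cd demo
printf 'old\n' > ql/File.java
cat > build.patch <<'EOF'
--- a/ql/File.java
+++ b/ql/File.java
@@ -1 +1 @@
-old
+new
EOF
patch -t -p0 < build.patch || true   # fails: a/ql/File.java does not exist
patch -p1 < build.patch              # succeeds once the a/ prefix is stripped
cat ql/File.java                     # file now contains "new"
```

In the log above, every target path fails even after stripping, which usually means the attachment was generated against a tree that has since changed (stale patch), not that the strip level is wrong.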
This message is automatically generated.
ATTACHMENT ID: 12873969 - PreCommit-HIVE-Build
> Scalar sub-query: sq_count_check not required if gby keys are constant
> ----------------------------------------------------------------------
>
> Key: HIVE-16793
> URL: https://issues.apache.org/jira/browse/HIVE-16793
> Project: Hive
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.0.0
> Reporter: Gopal V
> Assignee: Vineet Garg
> Attachments: HIVE-16793.1.patch, HIVE-16793.2.patch
>
>
> This query has an sq_count_check, though it is useless on a constant group-by key.
> {code}
> hive> explain select * from part where p_size > (select max(p_size) from part where p_type = '1' group by p_type);
> Warning: Map Join MAPJOIN[37][bigTable=?] in task 'Map 1' is a cross product
> Warning: Map Join MAPJOIN[36][bigTable=?] in task 'Map 1' is a cross product
> OK
> Plan optimized by CBO.
> Vertex dependency in root stage
> Map 1 <- Reducer 4 (BROADCAST_EDGE), Reducer 6 (BROADCAST_EDGE)
> Reducer 3 <- Map 2 (SIMPLE_EDGE)
> Reducer 4 <- Reducer 3 (CUSTOM_SIMPLE_EDGE)
> Reducer 6 <- Map 5 (SIMPLE_EDGE)
> Stage-0
> Fetch Operator
> limit:-1
> Stage-1
> Map 1 vectorized, llap
> File Output Operator [FS_64]
> Select Operator [SEL_63] (rows=66666666 width=621)
> Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"]
> Filter Operator [FIL_62] (rows=66666666 width=625)
> predicate:(_col5 > _col10)
> Map Join Operator [MAPJOIN_61] (rows=200000000 width=625)
> Conds:(Inner),Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8","_col10"]
> <-Reducer 6 [BROADCAST_EDGE] vectorized, llap
> BROADCAST [RS_58]
> Select Operator [SEL_57] (rows=1 width=4)
> Output:["_col0"]
> Group By Operator [GBY_56] (rows=1 width=89)
> Output:["_col0","_col1"],aggregations:["max(VALUE._col0)"],keys:KEY._col0
> <-Map 5 [SIMPLE_EDGE] vectorized, llap
> SHUFFLE [RS_55]
> PartitionCols:_col0
> Group By Operator [GBY_54] (rows=86 width=89)
> Output:["_col0","_col1"],aggregations:["max(_col1)"],keys:'1'
> Select Operator [SEL_53] (rows=1212121 width=109)
> Output:["_col1"]
> Filter Operator [FIL_52] (rows=1212121 width=109)
> predicate:(p_type = '1')
> TableScan [TS_17] (rows=200000000 width=109)
> tpch_flat_orc_1000@part,part,Tbl:COMPLETE,Col:COMPLETE,Output:["p_type","p_size"]
> <-Map Join Operator [MAPJOIN_60] (rows=200000000 width=621)
> Conds:(Inner),Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"]
> <-Reducer 4 [BROADCAST_EDGE] vectorized, llap
> BROADCAST [RS_51]
> Select Operator [SEL_50] (rows=1 width=8)
> Filter Operator [FIL_49] (rows=1 width=8)
> predicate:(sq_count_check(_col0) <= 1)
> Group By Operator [GBY_48] (rows=1 width=8)
> Output:["_col0"],aggregations:["count(VALUE._col0)"]
> <-Reducer 3 [CUSTOM_SIMPLE_EDGE] vectorized, llap
> PARTITION_ONLY_SHUFFLE [RS_47]
> Group By Operator [GBY_46] (rows=1 width=8)
> Output:["_col0"],aggregations:["count()"]
> Select Operator [SEL_45] (rows=1 width=85)
> Group By Operator [GBY_44] (rows=1 width=85)
> Output:["_col0"],keys:KEY._col0
> <-Map 2 [SIMPLE_EDGE] vectorized, llap
> SHUFFLE [RS_43]
> PartitionCols:_col0
> Group By Operator [GBY_42] (rows=83 width=85)
> Output:["_col0"],keys:'1'
> Select Operator [SEL_41] (rows=1212121 width=105)
> Filter Operator [FIL_40] (rows=1212121 width=105)
> predicate:(p_type = '1')
> TableScan [TS_2] (rows=200000000 width=105)
> tpch_flat_orc_1000@part,part,Tbl:COMPLETE,Col:COMPLETE,Output:["p_type"]
> <-Select Operator [SEL_59] (rows=200000000 width=621)
> Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7","_col8"]
> TableScan [TS_0] (rows=200000000 width=621)
> tpch_flat_orc_1000@part,part,Tbl:COMPLETE,Col:COMPLETE,Output:["p_partkey","p_name","p_mfgr","p_brand","p_type","p_size","p_container","p_retailprice","p_comment"]
> {code}
> -The other version without the filter is missing the check, though the compiler cannot assume the nDV of p_type.- Fixed by HIVE-16851
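The redundancy the issue describes can be sketched outside Hive: sq_count_check enforces at runtime that a scalar subquery yields at most one row, but when the WHERE clause pins the GROUP BY key to a single constant (here {{p_type = '1'}}), at most one group can ever exist, so the check is provably unnecessary. A minimal illustration in Python (simulated rows, not Hive code):

```python
# Illustrative sketch (not Hive internals) of why sq_count_check is
# redundant when the GROUP BY key is held constant by the WHERE clause.
from collections import defaultdict

def scalar_subquery_max(rows, key_filter):
    """Simulate: select max(p_size) from part
                 where p_type = key_filter group by p_type"""
    groups = defaultdict(list)
    for p_type, p_size in rows:
        if p_type == key_filter:           # WHERE p_type = '1'
            groups[p_type].append(p_size)  # GROUP BY p_type
    # The filter fixes the grouping key to one constant, so at most one
    # group exists -- the property that makes sq_count_check redundant.
    results = [max(sizes) for sizes in groups.values()]
    assert len(results) <= 1               # sq_count_check semantics
    return results[0] if results else None

part = [('1', 10), ('1', 25), ('2', 99)]
print(scalar_subquery_max(part, '1'))  # 25
```

Without the constant filter, distinct {{p_type}} values would produce multiple groups, and the runtime check would be required to reject the multi-row scalar subquery.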
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)