Posted to issues@hive.apache.org by "Hive QA (JIRA)" <ji...@apache.org> on 2018/02/01 07:52:00 UTC

[jira] [Commented] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage

    [ https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348148#comment-16348148 ] 

Hive QA commented on HIVE-17396:
--------------------------------

| +1 overall |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || Prechecks ||
| 0 | findbugs | 0m 1s | Findbugs executables are not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || master Compile Tests ||
| 0 | mvndep | 1m 22s | Maven dependency ordering for branch |
| +1 | mvninstall | 5m 49s | master passed |
| +1 | compile | 1m 13s | master passed |
| +1 | checkstyle | 0m 50s | master passed |
| +1 | javadoc | 1m 3s | master passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 41s | the patch passed |
| +1 | compile | 1m 14s | the patch passed |
| +1 | javac | 1m 14s | the patch passed |
| +1 | checkstyle | 0m 51s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | javadoc | 1m 4s | the patch passed |
|| || || || Other Tests ||
| +1 | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | | 16m 3s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 419593e |
| Default Java | 1.8.0_111 |
| modules | C: common ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8963/yetus.txt |
| Powered by | Apache Yetus    http://yetus.apache.org |


This message was automatically generated.



> Support DPP with map joins where the source and target belong in the same stage
> -------------------------------------------------------------------------------
>
>                 Key: HIVE-17396
>                 URL: https://issues.apache.org/jira/browse/HIVE-17396
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Janaki Lahorani
>            Assignee: Janaki Lahorani
>            Priority: Major
>         Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, HIVE-17396.6.patch, HIVE-17396.7.patch, HIVE-17396.8.patch, HIVE-17396.9.patch
>
>
> When the target of a partition pruning sink operator is not the same as the target of the hash table sink operator, both the source and the target get scheduled within the same Spark job, which can result in a FileNotFoundException.  HIVE-17225 has a fix that disables DPP in that scenario.  This JIRA is to support DPP for such cases.
> Test Case:
> SET hive.spark.dynamic.partition.pruning=true;
> SET hive.auto.convert.join=true;
> SET hive.strict.checks.cartesian.product=false;
> CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int);
> CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int);
> CREATE TABLE reg_table (col int);
> ALTER TABLE part_table1 ADD PARTITION (part1_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 1);
> ALTER TABLE part_table2 ADD PARTITION (part2_col = 2);
> INSERT INTO TABLE part_table1 PARTITION (part1_col = 1) VALUES (1);
> INSERT INTO TABLE part_table2 PARTITION (part2_col = 1) VALUES (1);
> INSERT INTO TABLE part_table2 PARTITION (part2_col = 2) VALUES (2);
> INSERT INTO table reg_table VALUES (1), (2), (3), (4), (5), (6);
> EXPLAIN SELECT *
> FROM   part_table1 pt1,
>        part_table2 pt2,
>        reg_table rt
> WHERE  rt.col = pt1.part1_col
> AND    pt2.part2_col = pt1.part1_col;
> Plan:
> STAGE DEPENDENCIES:
>   Stage-2 is a root stage
>   Stage-1 depends on stages: Stage-2
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-2
>     Spark
> #### A masked pattern was here ####
>       Vertices:
>         Map 1 
>             Map Operator Tree:
>                 TableScan
>                   alias: pt1
>                   Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                   Select Operator
>                     expressions: col (type: int), part1_col (type: int)
>                     outputColumnNames: _col0, _col1
>                     Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                     Spark HashTable Sink Operator
>                       keys:
>                         0 _col1 (type: int)
>                         1 _col1 (type: int)
>                         2 _col0 (type: int)
>                     Select Operator
>                       expressions: _col1 (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                       Group By Operator
>                         keys: _col0 (type: int)
>                         mode: hash
>                         outputColumnNames: _col0
>                         Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                         Spark Partition Pruning Sink Operator
>                           Target column: part2_col (int)
>                           partition key expr: part2_col
>                           Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE Column stats: NONE
>                           target work: Map 2
>             Local Work:
>               Map Reduce Local Work
>         Map 2 
>             Map Operator Tree:
>                 TableScan
>                   alias: pt2
>                   Statistics: Num rows: 2 Data size: 2 Basic stats: COMPLETE Column stats: NONE
>                   Select Operator
>                     expressions: col (type: int), part2_col (type: int)
>                     outputColumnNames: _col0, _col1
>                     Statistics: Num rows: 2 Data size: 2 Basic stats: COMPLETE Column stats: NONE
>                     Spark HashTable Sink Operator
>                       keys:
>                         0 _col1 (type: int)
>                         1 _col1 (type: int)
>                         2 _col0 (type: int)
>             Local Work:
>               Map Reduce Local Work
>   Stage: Stage-1
>     Spark
> #### A masked pattern was here ####
>       Vertices:
>         Map 3 
>             Map Operator Tree:
>                 TableScan
>                   alias: rt
>                   Statistics: Num rows: 6 Data size: 6 Basic stats: COMPLETE Column stats: NONE
>                   Filter Operator
>                     predicate: col is not null (type: boolean)
>                     Statistics: Num rows: 6 Data size: 6 Basic stats: COMPLETE Column stats: NONE
>                     Select Operator
>                       expressions: col (type: int)
>                       outputColumnNames: _col0
>                       Statistics: Num rows: 6 Data size: 6 Basic stats: COMPLETE Column stats: NONE
>                       Map Join Operator
>                         condition map:
>                              Inner Join 0 to 1
>                              Inner Join 0 to 2
>                         keys:
>                           0 _col1 (type: int)
>                           1 _col1 (type: int)
>                           2 _col0 (type: int)
>                         outputColumnNames: _col0, _col1, _col2, _col3, _col4
>                         input vertices:
>                           0 Map 1
>                           1 Map 2
>                         Statistics: Num rows: 13 Data size: 13 Basic stats: COMPLETE Column stats: NONE
>                         File Output Operator
>                           compressed: false
>                           Statistics: Num rows: 13 Data size: 13 Basic stats: COMPLETE Column stats: NONE
>                           table:
>                               input format: org.apache.hadoop.mapred.SequenceFileInputFormat
>                               output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
>                               serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>             Local Work:
>               Map Reduce Local Work
>   Stage: Stage-0
>     Fetch Operator
>       limit: -1
>       Processor Tree:
>         ListSink
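> In the plan above, Map 1 contains both the Spark HashTable Sink Operator that feeds the map join in Map 3 and the Spark Partition Pruning Sink Operator whose target work is Map 2, and both Map 1 and Map 2 are scheduled in Stage-2. In other words, the DPP source and its target run in the same Spark job, which is exactly the same-stage case this JIRA is meant to support.
> As a quick sanity check (a sketch only, assuming the tables, data, and settings from the test case above), running the query without EXPLAIN should scan only partition part2_col = 1 of part_table2 and return the single matching row instead of failing with a FileNotFoundException:
> -- Same query as above, executed rather than explained.
> SELECT *
> FROM   part_table1 pt1,
>        part_table2 pt2,
>        reg_table rt
> WHERE  rt.col = pt1.part1_col
> AND    pt2.part2_col = pt1.part1_col;
> -- Expected with the sample data: one row, roughly (1, 1, 1, 1, 1).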



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)