Posted to commits@hive.apache.org by br...@apache.org on 2014/11/29 04:44:28 UTC
svn commit: r1642395 [10/22] - in /hive/branches/spark/ql/src:
java/org/apache/hadoop/hive/ql/exec/spark/
java/org/apache/hadoop/hive/ql/exec/spark/session/
test/results/clientpositive/ test/results/clientpositive/spark/
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/limit_pushdown.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/limit_pushdown.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/limit_pushdown.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/limit_pushdown.q.out Sat Nov 29 03:44:22 2014
@@ -727,23 +727,19 @@ STAGE PLANS:
keys: KEY._col0 (type: string)
mode: mergepartial
outputColumnNames: _col0, _col1
- Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col1 (type: double)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col1 (type: double)
- sort order: +
- Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- TopN Hash Memory Usage: 0.3
- value expressions: _col0 (type: string)
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
+ Reduce Output Operator
+ key expressions: _col1 (type: double)
+ sort order: +
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
+ TopN Hash Memory Usage: 0.3
+ value expressions: _col0 (type: string)
Reducer 3
Reduce Operator Tree:
Select Operator
expressions: VALUE._col0 (type: string), KEY.reducesinkkey0 (type: double)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Limit
Number of rows: 20
Statistics: Num rows: 20 Data size: 200 Basic stats: COMPLETE Column stats: NONE
@@ -849,23 +845,19 @@ STAGE PLANS:
mode: mergepartial
outputColumnNames: _col0, _col1
Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col1 (type: bigint)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- Limit
- Number of rows: 2
- Statistics: Num rows: 2 Data size: 20 Basic stats: COMPLETE Column stats: NONE
- Filter Operator
- predicate: _col0 is not null (type: boolean)
- Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: NONE
- Spark HashTable Sink Operator
- condition expressions:
- 0 {_col1}
- 1 {_col0} {_col1}
- keys:
- 0 _col0 (type: string)
- 1 _col0 (type: string)
+ Limit
+ Number of rows: 2
+ Statistics: Num rows: 2 Data size: 20 Basic stats: COMPLETE Column stats: NONE
+ Filter Operator
+ predicate: _col0 is not null (type: boolean)
+ Statistics: Num rows: 1 Data size: 10 Basic stats: COMPLETE Column stats: NONE
+ Spark HashTable Sink Operator
+ condition expressions:
+ 0 {_col1}
+ 1 {_col0} {_col1}
+ keys:
+ 0 _col0 (type: string)
+ 1 _col0 (type: string)
Stage: Stage-1
Spark
@@ -904,18 +896,14 @@ STAGE PLANS:
mode: mergepartial
outputColumnNames: _col0, _col1
Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col1 (type: bigint)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 250 Data size: 2656 Basic stats: COMPLETE Column stats: NONE
- Limit
- Number of rows: 3
+ Limit
+ Number of rows: 3
+ Statistics: Num rows: 3 Data size: 30 Basic stats: COMPLETE Column stats: NONE
+ Reduce Output Operator
+ sort order:
Statistics: Num rows: 3 Data size: 30 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- sort order:
- Statistics: Num rows: 3 Data size: 30 Basic stats: COMPLETE Column stats: NONE
- TopN Hash Memory Usage: 0.3
- value expressions: _col0 (type: string), _col1 (type: bigint)
+ TopN Hash Memory Usage: 0.3
+ value expressions: _col0 (type: string), _col1 (type: bigint)
Reducer 5
Local Work:
Map Reduce Local Work
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part13.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part13.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part13.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part13.q.out Sat Nov 29 03:44:22 2014
@@ -73,39 +73,33 @@ STAGE PLANS:
TableScan
alias: src
Filter Operator
- predicate: ((key > 20) and (key < 40)) (type: boolean)
+ predicate: (key < 20) (type: boolean)
Select Operator
- expressions: key (type: string), value (type: string), '33' (type: string)
+ expressions: key (type: string), value (type: string), '22' (type: string)
outputColumnNames: _col0, _col1, _col2
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
- outputColumnNames: _col0, _col1, _col2
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.nzhang_part13
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.nzhang_part13
Map 3
Map Operator Tree:
TableScan
alias: src
Filter Operator
- predicate: (key < 20) (type: boolean)
+ predicate: ((key > 20) and (key < 40)) (type: boolean)
Select Operator
- expressions: key (type: string), value (type: string), '22' (type: string)
+ expressions: key (type: string), value (type: string), '33' (type: string)
outputColumnNames: _col0, _col1, _col2
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
- outputColumnNames: _col0, _col1, _col2
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.nzhang_part13
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.nzhang_part13
Union 2
Vertex: Union 2
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part14.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part14.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part14.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/load_dyn_part14.q.out Sat Nov 29 03:44:22 2014
@@ -74,7 +74,7 @@ STAGE PLANS:
alias: src
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: COMPLETE
Select Operator
- expressions: 'k2' (type: string), '' (type: string)
+ expressions: 'k1' (type: string), UDFToString(null) (type: string)
outputColumnNames: _col0, _col1
Statistics: Num rows: 500 Data size: 85000 Basic stats: COMPLETE Column stats: COMPLETE
Limit
@@ -90,15 +90,15 @@ STAGE PLANS:
alias: src
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: COMPLETE
Select Operator
- expressions: 'k3' (type: string), ' ' (type: string)
+ expressions: 'k2' (type: string), '' (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 500 Data size: 85500 Basic stats: COMPLETE Column stats: COMPLETE
+ Statistics: Num rows: 500 Data size: 85000 Basic stats: COMPLETE Column stats: COMPLETE
Limit
Number of rows: 2
- Statistics: Num rows: 2 Data size: 342 Basic stats: COMPLETE Column stats: COMPLETE
+ Statistics: Num rows: 2 Data size: 340 Basic stats: COMPLETE Column stats: COMPLETE
Reduce Output Operator
sort order:
- Statistics: Num rows: 2 Data size: 342 Basic stats: COMPLETE Column stats: COMPLETE
+ Statistics: Num rows: 2 Data size: 340 Basic stats: COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: string), _col1 (type: string)
Map 6
Map Operator Tree:
@@ -106,15 +106,15 @@ STAGE PLANS:
alias: src
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: COMPLETE
Select Operator
- expressions: 'k1' (type: string), UDFToString(null) (type: string)
+ expressions: 'k3' (type: string), ' ' (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 500 Data size: 85000 Basic stats: COMPLETE Column stats: COMPLETE
+ Statistics: Num rows: 500 Data size: 85500 Basic stats: COMPLETE Column stats: COMPLETE
Limit
Number of rows: 2
- Statistics: Num rows: 2 Data size: 340 Basic stats: COMPLETE Column stats: COMPLETE
+ Statistics: Num rows: 2 Data size: 342 Basic stats: COMPLETE Column stats: COMPLETE
Reduce Output Operator
sort order:
- Statistics: Num rows: 2 Data size: 340 Basic stats: COMPLETE Column stats: COMPLETE
+ Statistics: Num rows: 2 Data size: 342 Basic stats: COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: string), _col1 (type: string)
Reducer 2
Reduce Operator Tree:
@@ -123,16 +123,13 @@ STAGE PLANS:
outputColumnNames: _col0, _col1
Limit
Number of rows: 2
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.nzhang_part14
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.nzhang_part14
Reducer 5
Reduce Operator Tree:
Select Operator
@@ -140,16 +137,13 @@ STAGE PLANS:
outputColumnNames: _col0, _col1
Limit
Number of rows: 2
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.nzhang_part14
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.nzhang_part14
Reducer 7
Reduce Operator Tree:
Select Operator
@@ -157,16 +151,13 @@ STAGE PLANS:
outputColumnNames: _col0, _col1
Limit
Number of rows: 2
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.nzhang_part14
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.nzhang_part14
Union 3
Vertex: Union 3
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/louter_join_ppr.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/louter_join_ppr.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
Files hive/branches/spark/ql/src/test/results/clientpositive/spark/louter_join_ppr.q.out (original) and hive/branches/spark/ql/src/test/results/clientpositive/spark/louter_join_ppr.q.out Sat Nov 29 03:44:22 2014 differ
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin1.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin1.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin1.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin1.q.out Sat Nov 29 03:44:22 2014
@@ -39,7 +39,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: a
@@ -61,7 +61,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: b
@@ -80,7 +80,7 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 0 Map 2
+ 0 Map 1
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string)
@@ -141,7 +141,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: a
@@ -163,7 +163,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: b
@@ -182,7 +182,7 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 0 Map 2
+ 0 Map 1
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string)
@@ -245,7 +245,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: a
@@ -267,7 +267,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: src
@@ -290,7 +290,7 @@ STAGE PLANS:
1 _col0 (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 0 Map 1
+ 0 Map 2
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: struct<key:string,value:string>)
@@ -351,7 +351,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: a
@@ -370,7 +370,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: b
@@ -386,7 +386,7 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 0 Map 2
+ 0 Map 1
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string)
@@ -445,7 +445,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: a
@@ -467,7 +467,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: b
@@ -486,7 +486,7 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 0 Map 2
+ 0 Map 1
Statistics: Num rows: 182 Data size: 1939 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string)
@@ -547,7 +547,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: a
@@ -566,7 +566,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: src
@@ -589,7 +589,7 @@ STAGE PLANS:
1 _col0 (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 0 Map 1
+ 0 Map 2
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: struct<key:string,value:string>)
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_decimal.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_decimal.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_decimal.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_decimal.q.out Sat Nov 29 03:44:22 2014
@@ -92,7 +92,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: t2
@@ -114,7 +114,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: t1
@@ -133,7 +133,7 @@ STAGE PLANS:
1 dec (type: decimal(4,0))
outputColumnNames: _col0, _col4
input vertices:
- 1 Map 1
+ 1 Map 2
Statistics: Num rows: 577 Data size: 64680 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: decimal(4,2)), _col4 (type: decimal(4,0))
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_distinct.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_distinct.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_distinct.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_distinct.q.out Sat Nov 29 03:44:22 2014
@@ -20,7 +20,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 4
Map Operator Tree:
TableScan
alias: d
@@ -41,11 +41,11 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 3 <- Map 2 (GROUP PARTITION-LEVEL SORT, 1)
- Reducer 4 <- Reducer 3 (GROUP, 1)
+ Reducer 2 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 3 <- Reducer 2 (GROUP, 1)
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: c
@@ -64,25 +64,21 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col1
input vertices:
- 1 Map 1
+ 1 Map 4
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col1 (type: string)
- outputColumnNames: _col1
+ Group By Operator
+ keys: _col1 (type: string)
+ mode: hash
+ outputColumnNames: _col0
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Group By Operator
- keys: _col1 (type: string)
- mode: hash
- outputColumnNames: _col0
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ sort order: +
+ Map-reduce partition columns: rand() (type: double)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col0 (type: string)
- sort order: +
- Map-reduce partition columns: rand() (type: double)
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Local Work:
Map Reduce Local Work
- Reducer 3
+ Reducer 2
Reduce Operator Tree:
Group By Operator
keys: KEY._col0 (type: string)
@@ -94,7 +90,7 @@ STAGE PLANS:
sort order: +
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Reducer 4
+ Reducer 3
Reduce Operator Tree:
Group By Operator
keys: KEY._col0 (type: string)
@@ -169,7 +165,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 3
Map Operator Tree:
TableScan
alias: d
@@ -190,10 +186,10 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 3 <- Map 2 (GROUP, 3)
+ Reducer 2 <- Map 1 (GROUP, 3)
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: c
@@ -212,25 +208,21 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col1
input vertices:
- 1 Map 1
+ 1 Map 3
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col1 (type: string)
- outputColumnNames: _col1
+ Group By Operator
+ keys: _col1 (type: string)
+ mode: hash
+ outputColumnNames: _col0
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Group By Operator
- keys: _col1 (type: string)
- mode: hash
- outputColumnNames: _col0
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ sort order: +
+ Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col0 (type: string)
- sort order: +
- Map-reduce partition columns: _col0 (type: string)
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Local Work:
Map Reduce Local Work
- Reducer 3
+ Reducer 2
Reduce Operator Tree:
Group By Operator
keys: KEY._col0 (type: string)
@@ -305,7 +297,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 4
Map Operator Tree:
TableScan
alias: d
@@ -326,11 +318,11 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 3 <- Map 2 (GROUP PARTITION-LEVEL SORT, 3)
- Reducer 4 <- Reducer 3 (GROUP, 3)
+ Reducer 2 <- Map 1 (GROUP PARTITION-LEVEL SORT, 3)
+ Reducer 3 <- Reducer 2 (GROUP, 3)
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: c
@@ -349,20 +341,16 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col1
input vertices:
- 1 Map 1
+ 1 Map 4
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col1 (type: string)
- outputColumnNames: _col1
+ Reduce Output Operator
+ key expressions: _col1 (type: string)
+ sort order: +
+ Map-reduce partition columns: rand() (type: double)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col1 (type: string)
- sort order: +
- Map-reduce partition columns: rand() (type: double)
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Local Work:
Map Reduce Local Work
- Reducer 3
+ Reducer 2
Reduce Operator Tree:
Group By Operator
keys: KEY._col0 (type: string)
@@ -374,7 +362,7 @@ STAGE PLANS:
sort order: +
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Reducer 4
+ Reducer 3
Reduce Operator Tree:
Group By Operator
keys: KEY._col0 (type: string)
@@ -449,7 +437,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 3
Map Operator Tree:
TableScan
alias: d
@@ -470,10 +458,10 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 3 <- Map 2 (GROUP, 3)
+ Reducer 2 <- Map 1 (GROUP, 3)
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: c
@@ -492,20 +480,16 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col1
input vertices:
- 1 Map 1
+ 1 Map 3
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col1 (type: string)
- outputColumnNames: _col1
+ Reduce Output Operator
+ key expressions: _col1 (type: string)
+ sort order: +
+ Map-reduce partition columns: _col1 (type: string)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col1 (type: string)
- sort order: +
- Map-reduce partition columns: _col1 (type: string)
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
Local Work:
Map Reduce Local Work
- Reducer 3
+ Reducer 2
Reduce Operator Tree:
Group By Operator
keys: KEY._col0 (type: string)
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_filter_on_outerjoin.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_filter_on_outerjoin.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_filter_on_outerjoin.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_filter_on_outerjoin.q.out Sat Nov 29 03:44:22 2014
@@ -65,15 +65,15 @@ STAGE PLANS:
Map 1
Map Operator Tree:
TableScan
- alias: src2
+ alias: src1
Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
Filter Operator
- predicate: (key < 300) (type: boolean)
- Statistics: Num rows: 8 Data size: 61 Basic stats: COMPLETE Column stats: NONE
+ predicate: ((key < 300) and (key < 10)) (type: boolean)
+ Statistics: Num rows: 2 Data size: 15 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {key} {value}
- 1 {value}
+ 0 {value}
+ 1 {key} {value}
2 {key} {value}
filter predicates:
0
@@ -85,18 +85,18 @@ STAGE PLANS:
2 key (type: string)
Local Work:
Map Reduce Local Work
- Map 4
+ Map 2
Map Operator Tree:
TableScan
- alias: src1
+ alias: src2
Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
Filter Operator
- predicate: ((key < 300) and (key < 10)) (type: boolean)
- Statistics: Num rows: 2 Data size: 15 Basic stats: COMPLETE Column stats: NONE
+ predicate: (key < 300) (type: boolean)
+ Statistics: Num rows: 8 Data size: 61 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {value}
- 1 {key} {value}
+ 0 {key} {value}
+ 1 {value}
2 {key} {value}
filter predicates:
0
@@ -112,10 +112,10 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 3 <- Map 2 (SORT, 3)
+ Reducer 4 <- Map 3 (SORT, 3)
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 3
Map Operator Tree:
TableScan
alias: src3
@@ -141,8 +141,8 @@ STAGE PLANS:
2 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11
input vertices:
- 0 Map 4
- 1 Map 1
+ 0 Map 1
+ 1 Map 2
Statistics: Num rows: 365 Data size: 3878 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string), _col10 (type: string), _col11 (type: string)
@@ -155,7 +155,7 @@ STAGE PLANS:
value expressions: _col1 (type: string), _col3 (type: string), _col5 (type: string)
Local Work:
Map Reduce Local Work
- Reducer 3
+ Reducer 4
Reduce Operator Tree:
Select Operator
expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: string), KEY.reducesinkkey1 (type: string), VALUE._col1 (type: string), KEY.reducesinkkey2 (type: string), VALUE._col2 (type: string)
@@ -238,15 +238,15 @@ STAGE PLANS:
Map 1
Map Operator Tree:
TableScan
- alias: src2
+ alias: src1
Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
Filter Operator
- predicate: (key < 300) (type: boolean)
- Statistics: Num rows: 8 Data size: 61 Basic stats: COMPLETE Column stats: NONE
+ predicate: ((key < 300) and (key < 10)) (type: boolean)
+ Statistics: Num rows: 2 Data size: 15 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {key} {value}
- 1 {value}
+ 0 {value}
+ 1 {key} {value}
2 {key} {value}
filter predicates:
0
@@ -258,18 +258,18 @@ STAGE PLANS:
2 key (type: string)
Local Work:
Map Reduce Local Work
- Map 4
+ Map 2
Map Operator Tree:
TableScan
- alias: src1
+ alias: src2
Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
Filter Operator
- predicate: ((key < 300) and (key < 10)) (type: boolean)
- Statistics: Num rows: 2 Data size: 15 Basic stats: COMPLETE Column stats: NONE
+ predicate: (key < 300) (type: boolean)
+ Statistics: Num rows: 8 Data size: 61 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {value}
- 1 {key} {value}
+ 0 {key} {value}
+ 1 {value}
2 {key} {value}
filter predicates:
0
@@ -285,10 +285,10 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 3 <- Map 2 (SORT, 3)
+ Reducer 4 <- Map 3 (SORT, 3)
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 3
Map Operator Tree:
TableScan
alias: src3
@@ -314,8 +314,8 @@ STAGE PLANS:
2 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11
input vertices:
- 0 Map 4
- 1 Map 1
+ 0 Map 1
+ 1 Map 2
Statistics: Num rows: 365 Data size: 3878 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string), _col10 (type: string), _col11 (type: string)
@@ -328,7 +328,7 @@ STAGE PLANS:
value expressions: _col1 (type: string), _col3 (type: string), _col5 (type: string)
Local Work:
Map Reduce Local Work
- Reducer 3
+ Reducer 4
Reduce Operator Tree:
Select Operator
expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: string), KEY.reducesinkkey1 (type: string), VALUE._col1 (type: string), KEY.reducesinkkey2 (type: string), VALUE._col2 (type: string)
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_hook.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_hook.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_hook.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_hook.q.out Sat Nov 29 03:44:22 2014
@@ -38,6 +38,8 @@ PREHOOK: Input: default@srcpart@ds=2008-
PREHOOK: Input: default@srcpart@ds=2008-04-09/hr=11
PREHOOK: Input: default@srcpart@ds=2008-04-09/hr=12
PREHOOK: Output: default@dest1
+Status: Failed
+Status: Failed
[MapJoinCounter PostHook] COMMON_JOIN: 0 HINTED_MAPJOIN: 0 HINTED_MAPJOIN_LOCAL: 0 CONVERTED_MAPJOIN: 0 CONVERTED_MAPJOIN_LOCAL: 0 BACKUP_COMMON_JOIN: 0
RUN: Stage-3:MAPRED
RUN: Stage-1:MAPRED
@@ -48,6 +50,9 @@ INSERT OVERWRITE TABLE dest1 SELECT src1
PREHOOK: type: QUERY
PREHOOK: Input: default@src
PREHOOK: Output: default@dest1
+Status: Failed
+Status: Failed
+Status: Failed
[MapJoinCounter PostHook] COMMON_JOIN: 0 HINTED_MAPJOIN: 0 HINTED_MAPJOIN_LOCAL: 0 CONVERTED_MAPJOIN: 0 CONVERTED_MAPJOIN_LOCAL: 0 BACKUP_COMMON_JOIN: 0
RUN: Stage-4:MAPRED
RUN: Stage-3:MAPRED
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_mapjoin.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_mapjoin.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
Files hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_mapjoin.q.out (original) and hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_mapjoin.q.out Sat Nov 29 03:44:22 2014 differ
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_memcheck.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_memcheck.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_memcheck.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_memcheck.q.out Sat Nov 29 03:44:22 2014
@@ -38,7 +38,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: src2
@@ -60,7 +60,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: src1
@@ -79,7 +79,7 @@ STAGE PLANS:
1 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 1 Map 1
+ 1 Map 2
Statistics: Num rows: 5 Data size: 38 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string)
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery.q.out Sat Nov 29 03:44:22 2014
@@ -88,34 +88,30 @@ STAGE PLANS:
input vertices:
0 Map 1
Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string)
- outputColumnNames: _col0
- Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE
- Map Join Operator
- condition map:
- Inner Join 0 to 1
- condition expressions:
- 0 {_col0}
- 1 {value}
- keys:
- 0 _col0 (type: string)
- 1 key (type: string)
- outputColumnNames: _col0, _col5
- input vertices:
- 1 Map 3
+ Map Join Operator
+ condition map:
+ Inner Join 0 to 1
+ condition expressions:
+ 0 {_col0}
+ 1 {value}
+ keys:
+ 0 _col0 (type: string)
+ 1 key (type: string)
+ outputColumnNames: _col0, _col5
+ input vertices:
+ 1 Map 3
+ Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: _col0 (type: string), _col5 (type: string)
+ outputColumnNames: _col0, _col1
Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col5 (type: string)
- outputColumnNames: _col0, _col1
+ File Output Operator
+ compressed: false
Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
Local Work:
Map Reduce Local Work
@@ -338,34 +334,30 @@ STAGE PLANS:
input vertices:
0 Map 1
Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string)
- outputColumnNames: _col0
- Statistics: Num rows: 275 Data size: 2921 Basic stats: COMPLETE Column stats: NONE
- Map Join Operator
- condition map:
- Inner Join 0 to 1
- condition expressions:
- 0 {_col0}
- 1 {value}
- keys:
- 0 _col0 (type: string)
- 1 key (type: string)
- outputColumnNames: _col0, _col5
- input vertices:
- 1 Map 3
+ Map Join Operator
+ condition map:
+ Inner Join 0 to 1
+ condition expressions:
+ 0 {_col0}
+ 1 {value}
+ keys:
+ 0 _col0 (type: string)
+ 1 key (type: string)
+ outputColumnNames: _col0, _col5
+ input vertices:
+ 1 Map 3
+ Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: _col0 (type: string), _col5 (type: string)
+ outputColumnNames: _col0, _col1
Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col5 (type: string)
- outputColumnNames: _col0, _col1
+ File Output Operator
+ compressed: false
Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 302 Data size: 3213 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
Local Work:
Map Reduce Local Work
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery2.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery2.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery2.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_subquery2.q.out Sat Nov 29 03:44:22 2014
@@ -92,7 +92,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: x
@@ -131,7 +131,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 2
+ Map 1
Map Operator Tree:
TableScan
alias: y
@@ -150,7 +150,7 @@ STAGE PLANS:
1 id (type: int)
outputColumnNames: _col0, _col1, _col5, _col6
input vertices:
- 1 Map 1
+ 1 Map 2
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Select Operator
expressions: _col6 (type: int), _col5 (type: string), _col0 (type: int), _col1 (type: string)
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_test_outer.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_test_outer.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_test_outer.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mapjoin_test_outer.q.out Sat Nov 29 03:44:22 2014
@@ -253,16 +253,16 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 3
+ Map 1
Map Operator Tree:
TableScan
- alias: src3
- Statistics: Num rows: 9 Data size: 40 Basic stats: COMPLETE Column stats: NONE
+ alias: src1
+ Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {key} {value}
+ 0 {value}
1 {key} {value}
- 2 {value}
+ 2 {key} {value}
keys:
0 key (type: string)
1 key (type: string)
@@ -272,13 +272,13 @@ STAGE PLANS:
Map 4
Map Operator Tree:
TableScan
- alias: src1
- Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
+ alias: src3
+ Statistics: Num rows: 9 Data size: 40 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {value}
+ 0 {key} {value}
1 {key} {value}
- 2 {key} {value}
+ 2 {value}
keys:
0 key (type: string)
1 key (type: string)
@@ -289,10 +289,10 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 2 <- Map 1 (SORT, 3)
+ Reducer 3 <- Map 2 (SORT, 3)
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: src2
@@ -311,8 +311,8 @@ STAGE PLANS:
2 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11
input vertices:
- 0 Map 4
- 2 Map 3
+ 0 Map 1
+ 2 Map 4
Statistics: Num rows: 55 Data size: 420 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string), _col10 (type: string), _col11 (type: string)
@@ -324,7 +324,7 @@ STAGE PLANS:
Statistics: Num rows: 55 Data size: 420 Basic stats: COMPLETE Column stats: NONE
Local Work:
Map Reduce Local Work
- Reducer 2
+ Reducer 3
Reduce Operator Tree:
Select Operator
expressions: KEY.reducesinkkey0 (type: string), KEY.reducesinkkey1 (type: string), KEY.reducesinkkey2 (type: string), KEY.reducesinkkey3 (type: string), KEY.reducesinkkey4 (type: string), KEY.reducesinkkey5 (type: string)
@@ -1095,16 +1095,16 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 3
+ Map 1
Map Operator Tree:
TableScan
- alias: src3
- Statistics: Num rows: 9 Data size: 40 Basic stats: COMPLETE Column stats: NONE
+ alias: src1
+ Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {key} {value}
+ 0 {value}
1 {key} {value}
- 2 {value}
+ 2 {key} {value}
keys:
0 key (type: string)
1 key (type: string)
@@ -1114,13 +1114,13 @@ STAGE PLANS:
Map 4
Map Operator Tree:
TableScan
- alias: src1
- Statistics: Num rows: 25 Data size: 191 Basic stats: COMPLETE Column stats: NONE
+ alias: src3
+ Statistics: Num rows: 9 Data size: 40 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {value}
+ 0 {key} {value}
1 {key} {value}
- 2 {key} {value}
+ 2 {value}
keys:
0 key (type: string)
1 key (type: string)
@@ -1131,10 +1131,10 @@ STAGE PLANS:
Stage: Stage-1
Spark
Edges:
- Reducer 2 <- Map 1 (SORT, 3)
+ Reducer 3 <- Map 2 (SORT, 3)
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: src2
@@ -1153,8 +1153,8 @@ STAGE PLANS:
2 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11
input vertices:
- 0 Map 4
- 2 Map 3
+ 0 Map 1
+ 2 Map 4
Statistics: Num rows: 55 Data size: 420 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string), _col10 (type: string), _col11 (type: string)
@@ -1166,7 +1166,7 @@ STAGE PLANS:
Statistics: Num rows: 55 Data size: 420 Basic stats: COMPLETE Column stats: NONE
Local Work:
Map Reduce Local Work
- Reducer 2
+ Reducer 3
Reduce Operator Tree:
Select Operator
expressions: KEY.reducesinkkey0 (type: string), KEY.reducesinkkey1 (type: string), KEY.reducesinkkey2 (type: string), KEY.reducesinkkey3 (type: string), KEY.reducesinkkey4 (type: string), KEY.reducesinkkey5 (type: string)
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/mergejoins.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/mergejoins.q.out?rev=1642395&r1=1642394&r2=1642395&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/mergejoins.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/mergejoins.q.out Sat Nov 29 03:44:22 2014
@@ -52,10 +52,10 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
- alias: d
+ alias: b
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Filter Operator
predicate: val1 is not null (type: boolean)
@@ -63,9 +63,9 @@ STAGE PLANS:
Spark HashTable Sink Operator
condition expressions:
0 {val1} {val2}
- 1 {val1} {val2}
+ 1 {val2}
2 {val1} {val2}
- 3 {val2}
+ 3 {val1} {val2}
keys:
0 val1 (type: int)
1 val1 (type: int)
@@ -73,27 +73,31 @@ STAGE PLANS:
3 val1 (type: int)
Local Work:
Map Reduce Local Work
- Map 2
+ Map 3
Map Operator Tree:
TableScan
- alias: e
+ alias: c
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Filter Operator
- predicate: val2 is not null (type: boolean)
+ predicate: val1 is not null (type: boolean)
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {_col0} {_col1} {_col5} {_col6} {_col10} {_col11} {_col15} {_col16}
- 1 {val1}
+ 0 {val1} {val2}
+ 1 {val1} {val2}
+ 2 {val2}
+ 3 {val1} {val2}
keys:
- 0 _col1 (type: int)
- 1 val2 (type: int)
+ 0 val1 (type: int)
+ 1 val1 (type: int)
+ 2 val1 (type: int)
+ 3 val1 (type: int)
Local Work:
Map Reduce Local Work
- Map 3
+ Map 4
Map Operator Tree:
TableScan
- alias: b
+ alias: d
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Filter Operator
predicate: val1 is not null (type: boolean)
@@ -101,9 +105,9 @@ STAGE PLANS:
Spark HashTable Sink Operator
condition expressions:
0 {val1} {val2}
- 1 {val2}
+ 1 {val1} {val2}
2 {val1} {val2}
- 3 {val1} {val2}
+ 3 {val2}
keys:
0 val1 (type: int)
1 val1 (type: int)
@@ -111,25 +115,21 @@ STAGE PLANS:
3 val1 (type: int)
Local Work:
Map Reduce Local Work
- Map 4
+ Map 5
Map Operator Tree:
TableScan
- alias: c
+ alias: e
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Filter Operator
- predicate: val1 is not null (type: boolean)
+ predicate: val2 is not null (type: boolean)
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Spark HashTable Sink Operator
condition expressions:
- 0 {val1} {val2}
- 1 {val1} {val2}
- 2 {val2}
- 3 {val1} {val2}
+ 0 {_col0} {_col1} {_col5} {_col6} {_col10} {_col11} {_col15} {_col16}
+ 1 {val1}
keys:
- 0 val1 (type: int)
- 1 val1 (type: int)
- 2 val1 (type: int)
- 3 val1 (type: int)
+ 0 _col1 (type: int)
+ 1 val2 (type: int)
Local Work:
Map Reduce Local Work
@@ -137,7 +137,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 5
+ Map 1
Map Operator Tree:
TableScan
alias: a
@@ -162,9 +162,9 @@ STAGE PLANS:
3 val1 (type: int)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11, _col15, _col16
input vertices:
- 1 Map 3
- 2 Map 4
- 3 Map 1
+ 1 Map 2
+ 2 Map 3
+ 3 Map 4
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Map Join Operator
condition map:
@@ -177,7 +177,7 @@ STAGE PLANS:
1 val2 (type: int)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11, _col15, _col16, _col20, _col21
input vertices:
- 1 Map 2
+ 1 Map 5
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Select Operator
expressions: _col0 (type: int), _col1 (type: int), _col5 (type: int), _col6 (type: int), _col10 (type: int), _col11 (type: int), _col15 (type: int), _col16 (type: int), _col20 (type: int), _col21 (type: int)
@@ -215,7 +215,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 1
+ Map 2
Map Operator Tree:
TableScan
alias: b
@@ -235,7 +235,7 @@ STAGE PLANS:
2 key (type: string)
Local Work:
Map Reduce Local Work
- Map 2
+ Map 3
Map Operator Tree:
TableScan
alias: c
@@ -260,7 +260,7 @@ STAGE PLANS:
Spark
#### A masked pattern was here ####
Vertices:
- Map 3
+ Map 1
Map Operator Tree:
TableScan
alias: a
@@ -283,8 +283,8 @@ STAGE PLANS:
2 key (type: string)
outputColumnNames: _col0, _col1, _col5, _col6, _col10, _col11
input vertices:
- 1 Map 1
- 2 Map 2
+ 1 Map 2
+ 2 Map 3
Statistics: Num rows: 1100 Data size: 11686 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col5 (type: string), _col6 (type: string), _col10 (type: string), _col11 (type: string)