Posted to commits@hive.apache.org by xu...@apache.org on 2014/10/21 04:45:05 UTC
svn commit: r1633268 [7/7] - in /hive/branches/spark:
itests/src/test/resources/ ql/src/java/org/apache/hadoop/hive/ql/exec/
ql/src/java/org/apache/hadoop/hive/ql/exec/spark/
ql/src/java/org/apache/hadoop/hive/ql/parse/spark/
ql/src/java/org/apache/had...
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/subquery_multiinsert.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/subquery_multiinsert.q.out?rev=1633268&r1=1633267&r2=1633268&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/subquery_multiinsert.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/subquery_multiinsert.q.out Tue Oct 21 02:45:04 2014
@@ -58,50 +58,62 @@ INSERT OVERWRITE TABLE src_5
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
Stage-2 is a root stage
- Stage-4 depends on stages: Stage-2
- Stage-3 depends on stages: Stage-5, Stage-4
- Stage-0 depends on stages: Stage-3
- Stage-6 depends on stages: Stage-0
+ Stage-3 depends on stages: Stage-2
Stage-1 depends on stages: Stage-3
- Stage-7 depends on stages: Stage-1
- Stage-5 depends on stages: Stage-2
+ Stage-4 depends on stages: Stage-1
+ Stage-0 depends on stages: Stage-3
+ Stage-5 depends on stages: Stage-0
STAGE PLANS:
Stage: Stage-2
Spark
-#### A masked pattern was here ####
- Vertices:
- Map 3
- Map Operator Tree:
- TableScan
- alias: b
- Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
-
- Stage: Stage-4
- Spark
Edges:
- Reducer 10 <- Map 9 (GROUP, 1)
- Reducer 11 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1), Reducer 10 (GROUP PARTITION-LEVEL SORT, 1)
- Reducer 7 <- Map 6 (GROUP PARTITION-LEVEL SORT, 1), Reducer 11 (GROUP PARTITION-LEVEL SORT, 1)
- Reducer 8 <- Reducer 7 (GROUP SORT, 1)
+ Reducer 2 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1), Reducer 9 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 3 <- Map 7 (GROUP PARTITION-LEVEL SORT, 1), Reducer 2 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 4 <- Reducer 3 (GROUP SORT, 1)
+ Reducer 5 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1), Map 6 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 9 <- Map 8 (GROUP, 1)
#### A masked pattern was here ####
Vertices:
Map 1
Map Operator Tree:
TableScan
+ alias: b
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
value expressions: key (type: string), value (type: string)
+ Reduce Output Operator
+ key expressions: key (type: string), value (type: string)
+ sort order: ++
+ Map-reduce partition columns: key (type: string), value (type: string)
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Map 6
Map Operator Tree:
TableScan
+ alias: a
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
+ Filter Operator
+ predicate: (((key > '9') and key is not null) and value is not null) (type: boolean)
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: key (type: string), value (type: string)
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Group By Operator
+ keys: _col0 (type: string), _col1 (type: string)
+ mode: hash
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Reduce Output Operator
+ key expressions: _col0 (type: string), _col1 (type: string)
+ sort order: ++
+ Map-reduce partition columns: _col0 (type: string), _col1 (type: string)
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Map 7
+ Map Operator Tree:
+ TableScan
alias: s1
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Filter Operator
@@ -116,7 +128,7 @@ STAGE PLANS:
sort order: +
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 166 Data size: 1763 Basic stats: COMPLETE Column stats: NONE
- Map 9
+ Map 8
Map Operator Tree:
TableScan
alias: s1
@@ -135,29 +147,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint)
- Reducer 10
- Reduce Operator Tree:
- Group By Operator
- aggregations: count(VALUE._col0)
- mode: mergepartial
- outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
- Filter Operator
- predicate: (_col0 = 0) (type: boolean)
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Select Operator
- expressions: 0 (type: bigint)
- outputColumnNames: _col0
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Group By Operator
- keys: _col0 (type: bigint)
- mode: hash
- outputColumnNames: _col0
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Reduce Output Operator
- sort order:
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Reducer 11
+ Reducer 2
Reduce Operator Tree:
Join Operator
condition map:
@@ -173,7 +163,7 @@ STAGE PLANS:
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
value expressions: _col1 (type: string)
- Reducer 7
+ Reducer 3
Reduce Operator Tree:
Join Operator
condition map:
@@ -195,7 +185,7 @@ STAGE PLANS:
sort order: +
Statistics: Num rows: 302 Data size: 3208 Basic stats: COMPLETE Column stats: NONE
value expressions: _col1 (type: string)
- Reducer 8
+ Reducer 4
Reduce Operator Tree:
Select Operator
expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: string)
@@ -209,11 +199,55 @@ STAGE PLANS:
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.src_5
+ Reducer 5
+ Reduce Operator Tree:
+ Join Operator
+ condition map:
+ Left Semi Join 0 to 1
+ condition expressions:
+ 0 {KEY.reducesinkkey0} {KEY.reducesinkkey1}
+ 1
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.src_4
+ Reducer 9
+ Reduce Operator Tree:
+ Group By Operator
+ aggregations: count(VALUE._col0)
+ mode: mergepartial
+ outputColumnNames: _col0
+ Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
+ Filter Operator
+ predicate: (_col0 = 0) (type: boolean)
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
+ Select Operator
+ expressions: 0 (type: bigint)
+ outputColumnNames: _col0
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
+ Group By Operator
+ keys: _col0 (type: bigint)
+ mode: hash
+ outputColumnNames: _col0
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
+ Reduce Output Operator
+ sort order:
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Stage: Stage-3
Dependency Collection
- Stage: Stage-0
+ Stage: Stage-1
Move Operator
tables:
replace: true
@@ -221,12 +255,12 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.src_4
+ name: default.src_5
- Stage: Stage-6
+ Stage: Stage-4
Stats-Aggr Operator
- Stage: Stage-1
+ Stage: Stage-0
Move Operator
tables:
replace: true
@@ -234,69 +268,10 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.src_5
-
- Stage: Stage-7
- Stats-Aggr Operator
+ name: default.src_4
Stage: Stage-5
- Spark
- Edges:
- Reducer 5 <- Map 2 (GROUP PARTITION-LEVEL SORT, 1), Map 4 (GROUP PARTITION-LEVEL SORT, 1)
-#### A masked pattern was here ####
- Vertices:
- Map 2
- Map Operator Tree:
- TableScan
- Reduce Output Operator
- key expressions: key (type: string), value (type: string)
- sort order: ++
- Map-reduce partition columns: key (type: string), value (type: string)
- Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
- Map 4
- Map Operator Tree:
- TableScan
- alias: a
- Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
- Filter Operator
- predicate: (((key > '9') and key is not null) and value is not null) (type: boolean)
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: key (type: string), value (type: string)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Group By Operator
- keys: _col0 (type: string), _col1 (type: string)
- mode: hash
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col0 (type: string), _col1 (type: string)
- sort order: ++
- Map-reduce partition columns: _col0 (type: string), _col1 (type: string)
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Reducer 5
- Reduce Operator Tree:
- Join Operator
- condition map:
- Left Semi Join 0 to 1
- condition expressions:
- 0 {KEY.reducesinkkey0} {KEY.reducesinkkey1}
- 1
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.src_4
+ Stats-Aggr Operator
PREHOOK: query: from src b
INSERT OVERWRITE TABLE src_4
@@ -337,13 +312,11 @@ POSTHOOK: Lineage: src_4.value EXPRESSIO
POSTHOOK: Lineage: src_5.key EXPRESSION [(src)b.FieldSchema(name:key, type:string, comment:default), ]
POSTHOOK: Lineage: src_5.value EXPRESSION [(src)b.FieldSchema(name:value, type:string, comment:default), ]
RUN: Stage-2:MAPRED
-RUN: Stage-4:MAPRED
-RUN: Stage-5:MAPRED
RUN: Stage-3:DEPENDENCY_COLLECTION
-RUN: Stage-0:MOVE
RUN: Stage-1:MOVE
-RUN: Stage-6:STATS
-RUN: Stage-7:STATS
+RUN: Stage-0:MOVE
+RUN: Stage-4:STATS
+RUN: Stage-5:STATS
PREHOOK: query: select * from src_4
PREHOOK: type: QUERY
PREHOOK: Input: default@src_4
@@ -520,50 +493,62 @@ INSERT OVERWRITE TABLE src_5
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
Stage-2 is a root stage
- Stage-4 depends on stages: Stage-2
- Stage-3 depends on stages: Stage-5, Stage-4
- Stage-0 depends on stages: Stage-3
- Stage-6 depends on stages: Stage-0
+ Stage-3 depends on stages: Stage-2
Stage-1 depends on stages: Stage-3
- Stage-7 depends on stages: Stage-1
- Stage-5 depends on stages: Stage-2
+ Stage-4 depends on stages: Stage-1
+ Stage-0 depends on stages: Stage-3
+ Stage-5 depends on stages: Stage-0
STAGE PLANS:
Stage: Stage-2
Spark
-#### A masked pattern was here ####
- Vertices:
- Map 3
- Map Operator Tree:
- TableScan
- alias: b
- Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
-
- Stage: Stage-4
- Spark
Edges:
- Reducer 10 <- Map 9 (GROUP, 1)
- Reducer 11 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1), Reducer 10 (GROUP PARTITION-LEVEL SORT, 1)
- Reducer 7 <- Map 6 (GROUP PARTITION-LEVEL SORT, 1), Reducer 11 (GROUP PARTITION-LEVEL SORT, 1)
- Reducer 8 <- Reducer 7 (GROUP SORT, 1)
+ Reducer 2 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1), Reducer 9 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 3 <- Map 7 (GROUP PARTITION-LEVEL SORT, 1), Reducer 2 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 4 <- Reducer 3 (GROUP SORT, 1)
+ Reducer 5 <- Map 1 (GROUP PARTITION-LEVEL SORT, 1), Map 6 (GROUP PARTITION-LEVEL SORT, 1)
+ Reducer 9 <- Map 8 (GROUP, 1)
#### A masked pattern was here ####
Vertices:
Map 1
Map Operator Tree:
TableScan
+ alias: b
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
value expressions: key (type: string), value (type: string)
+ Reduce Output Operator
+ key expressions: key (type: string), value (type: string)
+ sort order: ++
+ Map-reduce partition columns: key (type: string), value (type: string)
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Map 6
Map Operator Tree:
TableScan
+ alias: a
+ Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
+ Filter Operator
+ predicate: (((key > '9') and key is not null) and value is not null) (type: boolean)
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: key (type: string), value (type: string)
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Group By Operator
+ keys: _col0 (type: string), _col1 (type: string)
+ mode: hash
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Reduce Output Operator
+ key expressions: _col0 (type: string), _col1 (type: string)
+ sort order: ++
+ Map-reduce partition columns: _col0 (type: string), _col1 (type: string)
+ Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
+ Map 7
+ Map Operator Tree:
+ TableScan
alias: s1
Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
Filter Operator
@@ -578,7 +563,7 @@ STAGE PLANS:
sort order: +
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 166 Data size: 1763 Basic stats: COMPLETE Column stats: NONE
- Map 9
+ Map 8
Map Operator Tree:
TableScan
alias: s1
@@ -597,29 +582,7 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint)
- Reducer 10
- Reduce Operator Tree:
- Group By Operator
- aggregations: count(VALUE._col0)
- mode: mergepartial
- outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
- Filter Operator
- predicate: (_col0 = 0) (type: boolean)
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Select Operator
- expressions: 0 (type: bigint)
- outputColumnNames: _col0
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Group By Operator
- keys: _col0 (type: bigint)
- mode: hash
- outputColumnNames: _col0
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Reduce Output Operator
- sort order:
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- Reducer 11
+ Reducer 2
Reduce Operator Tree:
Join Operator
condition map:
@@ -635,7 +598,7 @@ STAGE PLANS:
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
value expressions: _col1 (type: string)
- Reducer 7
+ Reducer 3
Reduce Operator Tree:
Join Operator
condition map:
@@ -657,7 +620,7 @@ STAGE PLANS:
sort order: +
Statistics: Num rows: 302 Data size: 3208 Basic stats: COMPLETE Column stats: NONE
value expressions: _col1 (type: string)
- Reducer 8
+ Reducer 4
Reduce Operator Tree:
Select Operator
expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: string)
@@ -671,11 +634,55 @@ STAGE PLANS:
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.src_5
+ Reducer 5
+ Reduce Operator Tree:
+ Join Operator
+ condition map:
+ Left Semi Join 0 to 1
+ condition expressions:
+ 0 {KEY.reducesinkkey0} {KEY.reducesinkkey1}
+ 1
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.src_4
+ Reducer 9
+ Reduce Operator Tree:
+ Group By Operator
+ aggregations: count(VALUE._col0)
+ mode: mergepartial
+ outputColumnNames: _col0
+ Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
+ Filter Operator
+ predicate: (_col0 = 0) (type: boolean)
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
+ Select Operator
+ expressions: 0 (type: bigint)
+ outputColumnNames: _col0
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
+ Group By Operator
+ keys: _col0 (type: bigint)
+ mode: hash
+ outputColumnNames: _col0
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
+ Reduce Output Operator
+ sort order:
+ Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
Stage: Stage-3
Dependency Collection
- Stage: Stage-0
+ Stage: Stage-1
Move Operator
tables:
replace: true
@@ -683,12 +690,12 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.src_4
+ name: default.src_5
- Stage: Stage-6
+ Stage: Stage-4
Stats-Aggr Operator
- Stage: Stage-1
+ Stage: Stage-0
Move Operator
tables:
replace: true
@@ -696,69 +703,10 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.src_5
-
- Stage: Stage-7
- Stats-Aggr Operator
+ name: default.src_4
Stage: Stage-5
- Spark
- Edges:
- Reducer 5 <- Map 2 (GROUP PARTITION-LEVEL SORT, 1), Map 4 (GROUP PARTITION-LEVEL SORT, 1)
-#### A masked pattern was here ####
- Vertices:
- Map 2
- Map Operator Tree:
- TableScan
- Reduce Output Operator
- key expressions: key (type: string), value (type: string)
- sort order: ++
- Map-reduce partition columns: key (type: string), value (type: string)
- Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
- Map 4
- Map Operator Tree:
- TableScan
- alias: a
- Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE Column stats: NONE
- Filter Operator
- predicate: (((key > '9') and key is not null) and value is not null) (type: boolean)
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: key (type: string), value (type: string)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Group By Operator
- keys: _col0 (type: string), _col1 (type: string)
- mode: hash
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col0 (type: string), _col1 (type: string)
- sort order: ++
- Map-reduce partition columns: _col0 (type: string), _col1 (type: string)
- Statistics: Num rows: 42 Data size: 446 Basic stats: COMPLETE Column stats: NONE
- Reducer 5
- Reduce Operator Tree:
- Join Operator
- condition map:
- Left Semi Join 0 to 1
- condition expressions:
- 0 {KEY.reducesinkkey0} {KEY.reducesinkkey1}
- 1
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 550 Data size: 5843 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.src_4
+ Stats-Aggr Operator
PREHOOK: query: from src b
INSERT OVERWRITE TABLE src_4
@@ -799,13 +747,11 @@ POSTHOOK: Lineage: src_4.value EXPRESSIO
POSTHOOK: Lineage: src_5.key EXPRESSION [(src)b.FieldSchema(name:key, type:string, comment:default), ]
POSTHOOK: Lineage: src_5.value EXPRESSION [(src)b.FieldSchema(name:value, type:string, comment:default), ]
RUN: Stage-2:MAPRED
-RUN: Stage-4:MAPRED
-RUN: Stage-5:MAPRED
RUN: Stage-3:DEPENDENCY_COLLECTION
-RUN: Stage-0:MOVE
RUN: Stage-1:MOVE
-RUN: Stage-6:STATS
-RUN: Stage-7:STATS
+RUN: Stage-0:MOVE
+RUN: Stage-4:STATS
+RUN: Stage-5:STATS
PREHOOK: query: select * from src_4
PREHOOK: type: QUERY
PREHOOK: Input: default@src_4
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/union18.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/union18.q.out?rev=1633268&r1=1633267&r2=1633268&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/union18.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/union18.q.out Tue Oct 21 02:45:04 2014
@@ -34,23 +34,21 @@ INSERT OVERWRITE TABLE DEST2 SELECT unio
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
Stage-2 is a root stage
- Stage-4 depends on stages: Stage-2
- Stage-3 depends on stages: Stage-4, Stage-5
+ Stage-3 depends on stages: Stage-2
Stage-0 depends on stages: Stage-3
- Stage-6 depends on stages: Stage-0
+ Stage-4 depends on stages: Stage-0
Stage-1 depends on stages: Stage-3
- Stage-7 depends on stages: Stage-1
- Stage-5 depends on stages: Stage-2
+ Stage-5 depends on stages: Stage-1
STAGE PLANS:
Stage: Stage-2
Spark
Edges:
- Reducer 4 <- Map 3 (GROUP, 1)
- Union 5 <- Map 6 (NONE, 0), Reducer 4 (NONE, 0)
+ Reducer 2 <- Map 1 (GROUP, 1)
+ Union 3 <- Map 4 (NONE, 0), Reducer 2 (NONE, 0)
#### A masked pattern was here ####
Vertices:
- Map 3
+ Map 1
Map Operator Tree:
TableScan
alias: s1
@@ -66,20 +64,34 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Map 6
+ Map 4
Map Operator Tree:
TableScan
alias: s2
Select Operator
expressions: key (type: string), value (type: string)
outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
- Reducer 4
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.dest1
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1, _col2
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.dest2
+ Reducer 2
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -88,34 +100,28 @@ STAGE PLANS:
Select Operator
expressions: 'tst1' (type: string), UDFToString(_col0) (type: string)
outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
- Union 5
- Vertex: Union 5
-
- Stage: Stage-4
- Spark
-#### A masked pattern was here ####
- Vertices:
- Map 1
- Map Operator Tree:
- TableScan
Select Operator
expressions: _col0 (type: string), _col1 (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 501 Data size: 136272 Basic stats: COMPLETE Column stats: PARTIAL
File Output Operator
compressed: false
- Statistics: Num rows: 501 Data size: 136272 Basic stats: COMPLETE Column stats: PARTIAL
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dest1
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1, _col2
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.dest2
+ Union 3
+ Vertex: Union 3
Stage: Stage-3
Dependency Collection
@@ -130,7 +136,7 @@ STAGE PLANS:
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dest1
- Stage: Stage-6
+ Stage: Stage-4
Stats-Aggr Operator
Stage: Stage-1
@@ -143,28 +149,8 @@ STAGE PLANS:
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dest2
- Stage: Stage-7
- Stats-Aggr Operator
-
Stage: Stage-5
- Spark
-#### A masked pattern was here ####
- Vertices:
- Map 2
- Map Operator Tree:
- TableScan
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1, _col2
- Statistics: Num rows: 501 Data size: 228456 Basic stats: COMPLETE Column stats: PARTIAL
- File Output Operator
- compressed: false
- Statistics: Num rows: 501 Data size: 228456 Basic stats: COMPLETE Column stats: PARTIAL
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.dest2
+ Stats-Aggr Operator
PREHOOK: query: FROM (select 'tst1' as key, cast(count(1) as string) as value from src s1
UNION ALL
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/union19.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/union19.q.out?rev=1633268&r1=1633267&r2=1633268&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/union19.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/union19.q.out Tue Oct 21 02:45:04 2014
@@ -34,23 +34,22 @@ INSERT OVERWRITE TABLE DEST2 SELECT unio
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
Stage-2 is a root stage
- Stage-4 depends on stages: Stage-2
- Stage-3 depends on stages: Stage-4, Stage-5
+ Stage-3 depends on stages: Stage-2
Stage-0 depends on stages: Stage-3
- Stage-6 depends on stages: Stage-0
+ Stage-4 depends on stages: Stage-0
Stage-1 depends on stages: Stage-3
- Stage-7 depends on stages: Stage-1
- Stage-5 depends on stages: Stage-2
+ Stage-5 depends on stages: Stage-1
STAGE PLANS:
Stage: Stage-2
Spark
Edges:
- Reducer 4 <- Map 3 (GROUP, 1)
- Union 5 <- Map 6 (NONE, 0), Reducer 4 (NONE, 0)
+ Reducer 2 <- Map 1 (GROUP, 1)
+ Reducer 4 <- Union 3 (GROUP, 1)
+ Union 3 <- Map 5 (NONE, 0), Reducer 2 (NONE, 0)
#### A masked pattern was here ####
Vertices:
- Map 3
+ Map 1
Map Operator Tree:
TableScan
alias: s1
@@ -66,20 +65,37 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: COMPLETE
value expressions: _col0 (type: bigint)
- Map 6
+ Map 5
Map Operator Tree:
TableScan
alias: s2
Select Operator
expressions: key (type: string), value (type: string)
outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
- Reducer 4
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1
+ Group By Operator
+ aggregations: count(_col1)
+ keys: _col0 (type: string)
+ mode: hash
+ outputColumnNames: _col0, _col1
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ sort order: +
+ Map-reduce partition columns: _col0 (type: string)
+ value expressions: _col1 (type: bigint)
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1, _col2
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.dest2
+ Reducer 2
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -88,41 +104,30 @@ STAGE PLANS:
Select Operator
expressions: 'tst1' (type: string), UDFToString(_col0) (type: string)
outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
- Union 5
- Vertex: Union 5
-
- Stage: Stage-4
- Spark
- Edges:
- Reducer 7 <- Map 1 (GROUP, 1)
-#### A masked pattern was here ####
- Vertices:
- Map 1
- Map Operator Tree:
- TableScan
Select Operator
expressions: _col0 (type: string), _col1 (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 501 Data size: 5584 Basic stats: COMPLETE Column stats: PARTIAL
Group By Operator
aggregations: count(_col1)
keys: _col0 (type: string)
mode: hash
outputColumnNames: _col0, _col1
- Statistics: Num rows: 250 Data size: 24000 Basic stats: COMPLETE Column stats: PARTIAL
Reduce Output Operator
key expressions: _col0 (type: string)
sort order: +
Map-reduce partition columns: _col0 (type: string)
- Statistics: Num rows: 250 Data size: 24000 Basic stats: COMPLETE Column stats: PARTIAL
value expressions: _col1 (type: bigint)
- Reducer 7
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: string), _col1 (type: string)
+ outputColumnNames: _col0, _col1, _col2
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.dest2
+ Reducer 4
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -142,6 +147,8 @@ STAGE PLANS:
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dest1
+ Union 3
+ Vertex: Union 3
Stage: Stage-3
Dependency Collection
@@ -156,7 +163,7 @@ STAGE PLANS:
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dest1
- Stage: Stage-6
+ Stage: Stage-4
Stats-Aggr Operator
Stage: Stage-1
@@ -169,28 +176,8 @@ STAGE PLANS:
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.dest2
- Stage: Stage-7
- Stats-Aggr Operator
-
Stage: Stage-5
- Spark
-#### A masked pattern was here ####
- Vertices:
- Map 2
- Map Operator Tree:
- TableScan
- Select Operator
- expressions: _col0 (type: string), _col1 (type: string), _col1 (type: string)
- outputColumnNames: _col0, _col1, _col2
- Statistics: Num rows: 501 Data size: 228456 Basic stats: COMPLETE Column stats: PARTIAL
- File Output Operator
- compressed: false
- Statistics: Num rows: 501 Data size: 228456 Basic stats: COMPLETE Column stats: PARTIAL
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.dest2
+ Stats-Aggr Operator
PREHOOK: query: FROM (select 'tst1' as key, cast(count(1) as string) as value from src s1
UNION ALL
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/union_remove_6.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/union_remove_6.q.out?rev=1633268&r1=1633267&r2=1633268&view=diff
==============================================================================
--- hive/branches/spark/ql/src/test/results/clientpositive/spark/union_remove_6.q.out (original)
+++ hive/branches/spark/ql/src/test/results/clientpositive/spark/union_remove_6.q.out Tue Oct 21 02:45:04 2014
@@ -64,22 +64,20 @@ insert overwrite table outputTbl2 select
POSTHOOK: type: QUERY
STAGE DEPENDENCIES:
Stage-2 is a root stage
- Stage-4 depends on stages: Stage-2
- Stage-3 depends on stages: Stage-4, Stage-5
+ Stage-3 depends on stages: Stage-2
Stage-0 depends on stages: Stage-3
Stage-1 depends on stages: Stage-3
- Stage-5 depends on stages: Stage-2
STAGE PLANS:
Stage: Stage-2
Spark
Edges:
- Reducer 4 <- Map 3 (GROUP, 1)
- Reducer 7 <- Map 6 (GROUP, 1)
- Union 5 <- Reducer 4 (NONE, 0), Reducer 7 (NONE, 0)
+ Reducer 2 <- Map 1 (GROUP, 1)
+ Reducer 5 <- Map 4 (GROUP, 1)
+ Union 3 <- Reducer 2 (NONE, 0), Reducer 5 (NONE, 0)
#### A masked pattern was here ####
Vertices:
- Map 3
+ Map 1
Map Operator Tree:
TableScan
alias: inputtbl1
@@ -100,7 +98,7 @@ STAGE PLANS:
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
value expressions: _col1 (type: bigint)
- Map 6
+ Map 4
Map Operator Tree:
TableScan
alias: inputtbl1
@@ -121,7 +119,7 @@ STAGE PLANS:
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
value expressions: _col1 (type: bigint)
- Reducer 4
+ Reducer 2
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -131,13 +129,27 @@ STAGE PLANS:
Select Operator
expressions: _col0 (type: string), _col1 (type: bigint)
outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
- Reducer 7
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: bigint)
+ outputColumnNames: _col0, _col1
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.outputtbl1
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: bigint)
+ outputColumnNames: _col0, _col1
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.outputtbl2
+ Reducer 5
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -147,34 +159,28 @@ STAGE PLANS:
Select Operator
expressions: _col0 (type: string), _col1 (type: bigint)
outputColumnNames: _col0, _col1
- File Output Operator
- compressed: false
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
- Union 5
- Vertex: Union 5
-
- Stage: Stage-4
- Spark
-#### A masked pattern was here ####
- Vertices:
- Map 1
- Map Operator Tree:
- TableScan
Select Operator
expressions: _col0 (type: string), _col1 (type: bigint)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.outputtbl1
+ Select Operator
+ expressions: _col0 (type: string), _col1 (type: bigint)
+ outputColumnNames: _col0, _col1
+ File Output Operator
+ compressed: false
+ table:
+ input format: org.apache.hadoop.mapred.TextInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ name: default.outputtbl2
+ Union 3
+ Vertex: Union 3
Stage: Stage-3
Dependency Collection
@@ -199,26 +205,6 @@ STAGE PLANS:
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
name: default.outputtbl2
- Stage: Stage-5
- Spark
-#### A masked pattern was here ####
- Vertices:
- Map 2
- Map Operator Tree:
- TableScan
- Select Operator
- expressions: _col0 (type: string), _col1 (type: bigint)
- outputColumnNames: _col0, _col1
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.TextInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- name: default.outputtbl2
-
PREHOOK: query: FROM (
SELECT key, count(1) as values from inputTbl1 group by key
UNION ALL
Modified: hive/branches/spark/ql/src/test/results/clientpositive/spark/vectorized_ptf.q.out
URL: http://svn.apache.org/viewvc/hive/branches/spark/ql/src/test/results/clientpositive/spark/vectorized_ptf.q.out?rev=1633268&r1=1633267&r2=1633268&view=diff
==============================================================================
Files hive/branches/spark/ql/src/test/results/clientpositive/spark/vectorized_ptf.q.out (original) and hive/branches/spark/ql/src/test/results/clientpositive/spark/vectorized_ptf.q.out Tue Oct 21 02:45:04 2014 differ