Posted to commits@hive.apache.org by xu...@apache.org on 2015/10/28 21:53:31 UTC
[2/2] hive git commit: HIVE-12284: Merge master to Spark branch
10/28/2015 [Spark Branch] update some test result (Reviewed by Chao)
Project: http://git-wip-us.apache.org/repos/asf/hive/repo
Commit: http://git-wip-us.apache.org/repos/asf/hive/commit/fd119291
Tree: http://git-wip-us.apache.org/repos/asf/hive/tree/fd119291
Diff: http://git-wip-us.apache.org/repos/asf/hive/diff/fd119291
Branch: refs/heads/spark
Commit: fd119291482f5fa75a97dda0bf4282b6bd73a970
Parents: c9073aa
Author: Xuefu Zhang <xz...@Cloudera.com>
Authored: Wed Oct 28 13:53:20 2015 -0700
Committer: Xuefu Zhang <xz...@Cloudera.com>
Committed: Wed Oct 28 13:53:20 2015 -0700
----------------------------------------------------------------------
.../spark/vector_inner_join.q.out | 36 ++--
.../spark/vector_outer_join0.q.out | 8 +-
.../spark/vector_outer_join1.q.out | 56 +++---
.../spark/vector_outer_join2.q.out | 24 +--
.../spark/vector_outer_join3.q.out | 72 ++++----
.../spark/vector_outer_join4.q.out | 56 +++---
.../spark/vector_outer_join5.q.out | 176 +++++++++----------
7 files changed, 214 insertions(+), 214 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hive/blob/fd119291/ql/src/test/results/clientpositive/spark/vector_inner_join.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/spark/vector_inner_join.q.out b/ql/src/test/results/clientpositive/spark/vector_inner_join.q.out
index bf7090b..e63e1f1 100644
--- a/ql/src/test/results/clientpositive/spark/vector_inner_join.q.out
+++ b/ql/src/test/results/clientpositive/spark/vector_inner_join.q.out
@@ -60,9 +60,9 @@ STAGE PLANS:
keys:
0 c (type: int)
1 a (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -97,9 +97,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -155,9 +155,9 @@ STAGE PLANS:
keys:
0 _col0 (type: int)
1 _col0 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -192,9 +192,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -277,9 +277,9 @@ STAGE PLANS:
keys:
0 c (type: int)
1 a (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -314,9 +314,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -363,9 +363,9 @@ STAGE PLANS:
keys:
0 c (type: int)
1 a (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -400,9 +400,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -449,9 +449,9 @@ STAGE PLANS:
keys:
0 c (type: int)
1 a (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -486,9 +486,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -535,9 +535,9 @@ STAGE PLANS:
keys:
0 c (type: int)
1 a (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -572,9 +572,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -621,9 +621,9 @@ STAGE PLANS:
keys:
0 c (type: int)
1 a (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -658,9 +658,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -707,9 +707,9 @@ STAGE PLANS:
keys:
0 a (type: int)
1 c (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -744,9 +744,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -793,9 +793,9 @@ STAGE PLANS:
keys:
0 a (type: int)
1 c (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -830,9 +830,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/fd119291/ql/src/test/results/clientpositive/spark/vector_outer_join0.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/spark/vector_outer_join0.q.out b/ql/src/test/results/clientpositive/spark/vector_outer_join0.q.out
index cc66db5..22c1b6a 100644
--- a/ql/src/test/results/clientpositive/spark/vector_outer_join0.q.out
+++ b/ql/src/test/results/clientpositive/spark/vector_outer_join0.q.out
@@ -87,9 +87,9 @@ STAGE PLANS:
keys:
0 a (type: int)
1 c (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -121,9 +121,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -176,9 +176,9 @@ STAGE PLANS:
keys:
0 a (type: int)
1 c (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -210,9 +210,9 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/fd119291/ql/src/test/results/clientpositive/spark/vector_outer_join1.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/spark/vector_outer_join1.q.out b/ql/src/test/results/clientpositive/spark/vector_outer_join1.q.out
index cfc4753..25d4d31 100644
--- a/ql/src/test/results/clientpositive/spark/vector_outer_join1.q.out
+++ b/ql/src/test/results/clientpositive/spark/vector_outer_join1.q.out
@@ -182,18 +182,18 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint), csmallint (type: smallint), cint (type: int), cbigint (type: bigint), cfloat (type: float), cdouble (type: double), cstring1 (type: string), cstring2 (type: string), ctimestamp1 (type: timestamp), ctimestamp2 (type: timestamp), cboolean1 (type: boolean), cboolean2 (type: boolean)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col2 (type: int)
1 _col2 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -203,11 +203,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint), csmallint (type: smallint), cint (type: int), cbigint (type: bigint), cfloat (type: float), cdouble (type: double), cstring1 (type: string), cstring2 (type: string), ctimestamp1 (type: timestamp), ctimestamp2 (type: timestamp), cboolean1 (type: boolean), cboolean2 (type: boolean)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -217,17 +217,17 @@ STAGE PLANS:
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18, _col19, _col20, _col21, _col22, _col23
input vertices:
1 Map 2
- Statistics: Num rows: 16 Data size: 4306 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 16 Data size: 4632 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 16 Data size: 4306 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 16 Data size: 4632 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -298,18 +298,18 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint)
outputColumnNames: _col0
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: tinyint)
1 _col0 (type: tinyint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -319,11 +319,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint)
outputColumnNames: _col0
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -333,17 +333,17 @@ STAGE PLANS:
outputColumnNames: _col0
input vertices:
1 Map 2
- Statistics: Num rows: 16 Data size: 4306 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 16 Data size: 4632 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 16 Data size: 4306 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 16 Data size: 4632 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -506,34 +506,34 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int)
outputColumnNames: _col0
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col1 (type: int)
1 _col0 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Map 4
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint)
outputColumnNames: _col0
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: tinyint)
1 _col0 (type: tinyint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -545,11 +545,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint), cint (type: int)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 15 Data size: 3915 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 15 Data size: 4211 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -559,7 +559,7 @@ STAGE PLANS:
outputColumnNames: _col0
input vertices:
1 Map 3
- Statistics: Num rows: 16 Data size: 4306 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 16 Data size: 4632 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -569,7 +569,7 @@ STAGE PLANS:
outputColumnNames: _col0
input vertices:
1 Map 4
- Statistics: Num rows: 17 Data size: 4736 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 17 Data size: 5095 Basic stats: COMPLETE Column stats: NONE
Group By Operator
aggregations: count(), sum(_col0)
mode: hash
@@ -579,10 +579,11 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint), _col1 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Reducer 2
+ Execution mode: vectorized
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0), sum(VALUE._col1)
@@ -596,7 +597,6 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/fd119291/ql/src/test/results/clientpositive/spark/vector_outer_join2.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/spark/vector_outer_join2.q.out b/ql/src/test/results/clientpositive/spark/vector_outer_join2.q.out
index 38051fd..063fdde 100644
--- a/ql/src/test/results/clientpositive/spark/vector_outer_join2.q.out
+++ b/ql/src/test/results/clientpositive/spark/vector_outer_join2.q.out
@@ -198,34 +198,34 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5056 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5237 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int)
outputColumnNames: _col0
- Statistics: Num rows: 20 Data size: 5056 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5237 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: int)
1 _col0 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Map 4
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5056 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5237 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cbigint (type: bigint)
outputColumnNames: _col0
- Statistics: Num rows: 20 Data size: 5056 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5237 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col1 (type: bigint)
1 _col0 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -237,11 +237,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5056 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5237 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int), cbigint (type: bigint)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 20 Data size: 5056 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5237 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -251,7 +251,7 @@ STAGE PLANS:
outputColumnNames: _col1
input vertices:
1 Map 3
- Statistics: Num rows: 22 Data size: 5561 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 22 Data size: 5760 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -261,7 +261,7 @@ STAGE PLANS:
outputColumnNames: _col1
input vertices:
1 Map 4
- Statistics: Num rows: 24 Data size: 6117 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 24 Data size: 6336 Basic stats: COMPLETE Column stats: NONE
Group By Operator
aggregations: count(), sum(_col1)
mode: hash
@@ -271,10 +271,11 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint), _col1 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Reducer 2
+ Execution mode: vectorized
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0), sum(VALUE._col1)
@@ -288,7 +289,6 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/fd119291/ql/src/test/results/clientpositive/spark/vector_outer_join3.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/spark/vector_outer_join3.q.out b/ql/src/test/results/clientpositive/spark/vector_outer_join3.q.out
index b029e1c..b79c590 100644
--- a/ql/src/test/results/clientpositive/spark/vector_outer_join3.q.out
+++ b/ql/src/test/results/clientpositive/spark/vector_outer_join3.q.out
@@ -198,34 +198,34 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int)
outputColumnNames: _col0
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: int)
1 _col0 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Map 4
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cstring1 (type: string)
outputColumnNames: _col0
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col1 (type: string)
1 _col0 (type: string)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -237,11 +237,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int), cstring1 (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -251,7 +251,7 @@ STAGE PLANS:
outputColumnNames: _col1
input vertices:
1 Map 3
- Statistics: Num rows: 22 Data size: 5544 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 22 Data size: 5743 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -260,7 +260,7 @@ STAGE PLANS:
1 _col0 (type: string)
input vertices:
1 Map 4
- Statistics: Num rows: 24 Data size: 6098 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 24 Data size: 6317 Basic stats: COMPLETE Column stats: NONE
Group By Operator
aggregations: count()
mode: hash
@@ -270,10 +270,11 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Reducer 2
+ Execution mode: vectorized
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -287,7 +288,6 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -352,34 +352,34 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cstring2 (type: string)
outputColumnNames: _col0
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col1 (type: string)
1 _col0 (type: string)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Map 4
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cstring1 (type: string)
outputColumnNames: _col0
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: string)
1 _col0 (type: string)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -391,11 +391,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cstring1 (type: string), cstring2 (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -405,7 +405,7 @@ STAGE PLANS:
outputColumnNames: _col0
input vertices:
1 Map 3
- Statistics: Num rows: 22 Data size: 5544 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 22 Data size: 5743 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -414,7 +414,7 @@ STAGE PLANS:
1 _col0 (type: string)
input vertices:
1 Map 4
- Statistics: Num rows: 24 Data size: 6098 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 24 Data size: 6317 Basic stats: COMPLETE Column stats: NONE
Group By Operator
aggregations: count()
mode: hash
@@ -424,10 +424,11 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Reducer 2
+ Execution mode: vectorized
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -441,7 +442,6 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -506,34 +506,34 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cbigint (type: bigint), cstring2 (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col3 (type: string), _col1 (type: bigint)
1 _col1 (type: string), _col0 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Map 4
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int), cstring1 (type: string)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col2 (type: string), _col0 (type: int)
1 _col1 (type: string), _col0 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -545,11 +545,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int), cbigint (type: bigint), cstring1 (type: string), cstring2 (type: string)
outputColumnNames: _col0, _col1, _col2, _col3
- Statistics: Num rows: 20 Data size: 5040 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 20 Data size: 5221 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -559,7 +559,7 @@ STAGE PLANS:
outputColumnNames: _col0, _col2
input vertices:
1 Map 3
- Statistics: Num rows: 22 Data size: 5544 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 22 Data size: 5743 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -568,7 +568,7 @@ STAGE PLANS:
1 _col1 (type: string), _col0 (type: int)
input vertices:
1 Map 4
- Statistics: Num rows: 24 Data size: 6098 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 24 Data size: 6317 Basic stats: COMPLETE Column stats: NONE
Group By Operator
aggregations: count()
mode: hash
@@ -578,10 +578,11 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Reducer 2
+ Execution mode: vectorized
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -595,7 +596,6 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/fd119291/ql/src/test/results/clientpositive/spark/vector_outer_join4.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/spark/vector_outer_join4.q.out b/ql/src/test/results/clientpositive/spark/vector_outer_join4.q.out
index 182dbb0..03db229 100644
--- a/ql/src/test/results/clientpositive/spark/vector_outer_join4.q.out
+++ b/ql/src/test/results/clientpositive/spark/vector_outer_join4.q.out
@@ -212,18 +212,18 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint), csmallint (type: smallint), cint (type: int), cbigint (type: bigint), cfloat (type: float), cdouble (type: double), cstring1 (type: string), cstring2 (type: string), ctimestamp1 (type: timestamp), ctimestamp2 (type: timestamp), cboolean1 (type: boolean), cboolean2 (type: boolean)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col2 (type: int)
1 _col2 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -233,11 +233,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint), csmallint (type: smallint), cint (type: int), cbigint (type: bigint), cfloat (type: float), cdouble (type: double), cstring1 (type: string), cstring2 (type: string), ctimestamp1 (type: timestamp), ctimestamp2 (type: timestamp), cboolean1 (type: boolean), cboolean2 (type: boolean)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -247,17 +247,17 @@ STAGE PLANS:
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18, _col19, _col20, _col21, _col22, _col23
input vertices:
1 Map 2
- Statistics: Num rows: 33 Data size: 4727 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 33 Data size: 5054 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 33 Data size: 4727 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 33 Data size: 5054 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -363,18 +363,18 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint)
outputColumnNames: _col0
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: tinyint)
1 _col0 (type: tinyint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -384,11 +384,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint)
outputColumnNames: _col0
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -398,17 +398,17 @@ STAGE PLANS:
outputColumnNames: _col0
input vertices:
1 Map 2
- Statistics: Num rows: 33 Data size: 4727 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 33 Data size: 5054 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 33 Data size: 4727 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 33 Data size: 5054 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator
@@ -876,34 +876,34 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: cint (type: int)
outputColumnNames: _col0
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col1 (type: int)
1 _col0 (type: int)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Map 4
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint)
outputColumnNames: _col0
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0 _col0 (type: tinyint)
1 _col0 (type: tinyint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Stage: Stage-1
Spark
@@ -915,11 +915,11 @@ STAGE PLANS:
Map Operator Tree:
TableScan
alias: c
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: ctinyint (type: tinyint), cint (type: int)
outputColumnNames: _col0, _col1
- Statistics: Num rows: 30 Data size: 4298 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 30 Data size: 4595 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -929,7 +929,7 @@ STAGE PLANS:
outputColumnNames: _col0
input vertices:
1 Map 3
- Statistics: Num rows: 33 Data size: 4727 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 33 Data size: 5054 Basic stats: COMPLETE Column stats: NONE
Map Join Operator
condition map:
Left Outer Join0 to 1
@@ -938,7 +938,7 @@ STAGE PLANS:
1 _col0 (type: tinyint)
input vertices:
1 Map 4
- Statistics: Num rows: 36 Data size: 5199 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 36 Data size: 5559 Basic stats: COMPLETE Column stats: NONE
Group By Operator
aggregations: count()
mode: hash
@@ -948,10 +948,11 @@ STAGE PLANS:
sort order:
Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
value expressions: _col0 (type: bigint)
+ Execution mode: vectorized
Local Work:
Map Reduce Local Work
- Execution mode: vectorized
Reducer 2
+ Execution mode: vectorized
Reduce Operator Tree:
Group By Operator
aggregations: count(VALUE._col0)
@@ -965,7 +966,6 @@ STAGE PLANS:
input format: org.apache.hadoop.mapred.TextInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
- Execution mode: vectorized
Stage: Stage-0
Fetch Operator