Posted to commits@hive.apache.org by jc...@apache.org on 2018/03/22 16:57:42 UTC
[19/34] hive git commit: HIVE-18979: Enable AggregateReduceFunctionsRule from Calcite (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan)
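For context on the golden-file updates below: AggregateReduceFunctionsRule decomposes avg, var_pop, var_samp, stddev_pop, and stddev_samp into plain sum/count aggregates, and a trailing Select Operator reassembles the final value arithmetically. A minimal HiveQL sketch of the avg case, using the alltypesparquet test table that these plans run against:

    -- Identities applied by the rule (n = count(x), s = sum(x), s2 = sum(x*x)):
    --   avg(x)         =>  s / n
    --   var_pop(x)     =>  (s2 - s*s/n) / n
    --   var_samp(x)    =>  (s2 - s*s/n) / CASE WHEN n = 1 THEN null ELSE n - 1 END
    --   stddev_pop(x)  =>  power(var_pop(x), 0.5)
    --   stddev_samp(x) =>  power(var_samp(x), 0.5)
    SELECT avg(cbigint) FROM alltypesparquet;
    -- is planned as if written:
    SELECT sum(cbigint) / count(cbigint) FROM alltypesparquet;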
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_16.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_16.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_16.q.out
index 398443b..cf06c91 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_16.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_16.q.out
@@ -64,38 +64,39 @@ STAGE PLANS:
predicate: (((cdouble >= -1.389D) or (cstring1 < 'a')) and (cstring2 like '%b%')) (type: boolean)
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: cdouble (type: double), cstring1 (type: string), ctimestamp1 (type: timestamp)
- outputColumnNames: cdouble, cstring1, ctimestamp1
+ expressions: cstring1 (type: string), cdouble (type: double), ctimestamp1 (type: timestamp), (cdouble * cdouble) (type: double)
+ outputColumnNames: _col0, _col1, _col2, _col3
Select Vectorization:
className: VectorSelectOperator
native: true
- projectedOutputColumnNums: [5, 6, 8]
+ projectedOutputColumnNums: [6, 5, 8, 13]
+ selectExpressions: DoubleColMultiplyDoubleColumn(col 5:double, col 5:double) -> 13:double
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: count(cdouble), stddev_samp(cdouble), min(cdouble)
+ aggregations: count(_col1), sum(_col3), sum(_col1), min(_col1)
Group By Vectorization:
- aggregators: VectorUDAFCount(col 5:double) -> bigint, VectorUDAFVarDouble(col 5:double) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_samp, VectorUDAFMinDouble(col 5:double) -> double
+ aggregators: VectorUDAFCount(col 5:double) -> bigint, VectorUDAFSumDouble(col 13:double) -> double, VectorUDAFSumDouble(col 5:double) -> double, VectorUDAFMinDouble(col 5:double) -> double
className: VectorGroupByOperator
groupByMode: HASH
- keyExpressions: col 5:double, col 6:string, col 8:timestamp
+ keyExpressions: col 6:string, col 5:double, col 8:timestamp
native: false
vectorProcessingMode: HASH
- projectedOutputColumnNums: [0, 1, 2]
- keys: cdouble (type: double), cstring1 (type: string), ctimestamp1 (type: timestamp)
+ projectedOutputColumnNums: [0, 1, 2, 3]
+ keys: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp)
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
- key expressions: _col0 (type: double), _col1 (type: string), _col2 (type: timestamp)
+ key expressions: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp)
sort order: +++
- Map-reduce partition columns: _col0 (type: double), _col1 (type: string), _col2 (type: timestamp)
+ Map-reduce partition columns: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp)
Reduce Sink Vectorization:
className: VectorReduceSinkOperator
native: false
nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col3 (type: bigint), _col4 (type: struct<count:bigint,sum:double,variance:double>), _col5 (type: double)
+ value expressions: _col3 (type: bigint), _col4 (type: double), _col5 (type: double), _col6 (type: double)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -111,20 +112,20 @@ STAGE PLANS:
includeColumns: [5, 6, 7, 8]
dataColumns: ctinyint:tinyint, csmallint:smallint, cint:int, cbigint:bigint, cfloat:float, cdouble:double, cstring1:string, cstring2:string, ctimestamp1:timestamp, ctimestamp2:timestamp, cboolean1:boolean, cboolean2:boolean
partitionColumnCount: 0
- scratchColumnTypeNames: []
+ scratchColumnTypeNames: [double]
Reduce Vectorization:
enabled: false
enableConditionsMet: hive.vectorized.execution.reduce.enabled IS true
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: count(VALUE._col0), stddev_samp(VALUE._col1), min(VALUE._col2)
- keys: KEY._col0 (type: double), KEY._col1 (type: string), KEY._col2 (type: timestamp)
+ aggregations: count(VALUE._col0), sum(VALUE._col1), sum(VALUE._col2), min(VALUE._col3)
+ keys: KEY._col0 (type: string), KEY._col1 (type: double), KEY._col2 (type: timestamp)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
Statistics: Num rows: 2048 Data size: 24576 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: _col1 (type: string), _col0 (type: double), _col2 (type: timestamp), (_col0 - 9763215.5639D) (type: double), (- (_col0 - 9763215.5639D)) (type: double), _col3 (type: bigint), _col4 (type: double), (- _col4) (type: double), (_col4 * UDFToDouble(_col3)) (type: double), _col5 (type: double), (9763215.5639D / _col0) (type: double), (CAST( _col3 AS decimal(19,0)) / -1.389) (type: decimal(28,6)), _col4 (type: double)
+ expressions: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp), (_col1 - 9763215.5639D) (type: double), (- (_col1 - 9763215.5639D)) (type: double), _col3 (type: bigint), power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) (type: double), (- power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5)) (type: double), (power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) * UDFToDouble(_col3)) (type: double), _col6 (type: double), (9763215.5639D / _col1) (type: double), (CAST( _col3 AS decimal(19,0)) / -1.389) (type: decimal(28,6)), power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12
Statistics: Num rows: 2048 Data size: 24576 Basic stats: COMPLETE Column stats: NONE
File Output Operator
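In parquet_vectorization_16 the map-side Group By now accumulates count(cdouble), sum(cdouble*cdouble) (via scratch column 13), sum(cdouble), and min(cdouble) with plain VectorUDAFCount/VectorUDAFSumDouble instead of VectorUDAFVarDouble's struct accumulator, and the reduce-side Select rebuilds stddev_samp. A sketch of the equivalent standalone HiveQL, reconstructed from the plan above (the WHERE clause and grouping keys are read off the Filter and Group By operators):

    SELECT cstring1, cdouble, ctimestamp1,
           power((sum(cdouble * cdouble) - sum(cdouble) * sum(cdouble) / count(cdouble))
                 / CASE WHEN count(cdouble) = 1 THEN null ELSE count(cdouble) - 1 END,
                 0.5) AS stddev_samp_rewritten
    FROM alltypesparquet
    WHERE cstring2 LIKE '%b%' AND (cdouble >= -1.389 OR cstring1 < 'a')
    GROUP BY cstring1, cdouble, ctimestamp1;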
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_2.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_2.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_2.q.out
index 6f35ea0..131797d 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_2.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_2.q.out
@@ -70,25 +70,26 @@ STAGE PLANS:
predicate: (((cdouble < UDFToDouble(ctinyint)) and ((UDFToDouble(ctimestamp2) <> -10669.0D) or (cint < 359))) or ((ctimestamp1 < ctimestamp2) and (cstring2 like 'b%') and (cfloat <= -5638.15))) (type: boolean)
Statistics: Num rows: 4778 Data size: 57336 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: ctinyint (type: tinyint), csmallint (type: smallint), cbigint (type: bigint), cfloat (type: float), cdouble (type: double)
- outputColumnNames: ctinyint, csmallint, cbigint, cfloat, cdouble
+ expressions: csmallint (type: smallint), cfloat (type: float), cbigint (type: bigint), ctinyint (type: tinyint), cdouble (type: double), UDFToDouble(cbigint) (type: double), (UDFToDouble(cbigint) * UDFToDouble(cbigint)) (type: double)
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
Select Vectorization:
className: VectorSelectOperator
native: true
- projectedOutputColumnNums: [0, 1, 3, 4, 5]
+ projectedOutputColumnNums: [1, 4, 3, 0, 5, 13, 16]
+ selectExpressions: CastLongToDouble(col 3:bigint) -> 13:double, DoubleColMultiplyDoubleColumn(col 14:double, col 15:double)(children: CastLongToDouble(col 3:bigint) -> 14:double, CastLongToDouble(col 3:bigint) -> 15:double) -> 16:double
Statistics: Num rows: 4778 Data size: 57336 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(csmallint), sum(cfloat), var_pop(cbigint), count(), min(ctinyint), avg(cdouble)
+ aggregations: sum(_col0), count(_col0), sum(_col1), sum(_col6), sum(_col5), count(_col2), count(), min(_col3), sum(_col4), count(_col4)
Group By Vectorization:
- aggregators: VectorUDAFAvgLong(col 1:smallint) -> struct<count:bigint,sum:double,input:smallint>, VectorUDAFSumDouble(col 4:float) -> double, VectorUDAFVarLong(col 3:bigint) -> struct<count:bigint,sum:double,variance:double> aggregation: var_pop, VectorUDAFCountStar(*) -> bigint, VectorUDAFMinLong(col 0:tinyint) -> tinyint, VectorUDAFAvgDouble(col 5:double) -> struct<count:bigint,sum:double,input:double>
+ aggregators: VectorUDAFSumLong(col 1:smallint) -> bigint, VectorUDAFCount(col 1:smallint) -> bigint, VectorUDAFSumDouble(col 4:float) -> double, VectorUDAFSumDouble(col 16:double) -> double, VectorUDAFSumDouble(col 13:double) -> double, VectorUDAFCount(col 3:bigint) -> bigint, VectorUDAFCountStar(*) -> bigint, VectorUDAFMinLong(col 0:tinyint) -> tinyint, VectorUDAFSumDouble(col 5:double) -> double, VectorUDAFCount(col 5:double) -> bigint
className: VectorGroupByOperator
groupByMode: HASH
native: false
vectorProcessingMode: HASH
- projectedOutputColumnNums: [0, 1, 2, 3, 4, 5]
+ projectedOutputColumnNums: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
- Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9
+ Statistics: Num rows: 1 Data size: 76 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
Reduce Sink Vectorization:
@@ -96,8 +97,8 @@ STAGE PLANS:
native: false
nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
- Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: struct<count:bigint,sum:double,input:smallint>), _col1 (type: double), _col2 (type: struct<count:bigint,sum:double,variance:double>), _col3 (type: bigint), _col4 (type: tinyint), _col5 (type: struct<count:bigint,sum:double,input:double>)
+ Statistics: Num rows: 1 Data size: 76 Basic stats: COMPLETE Column stats: NONE
+ value expressions: _col0 (type: bigint), _col1 (type: bigint), _col2 (type: double), _col3 (type: double), _col4 (type: double), _col5 (type: bigint), _col6 (type: bigint), _col7 (type: tinyint), _col8 (type: double), _col9 (type: bigint)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -113,24 +114,24 @@ STAGE PLANS:
includeColumns: [0, 1, 2, 3, 4, 5, 7, 8, 9]
dataColumns: ctinyint:tinyint, csmallint:smallint, cint:int, cbigint:bigint, cfloat:float, cdouble:double, cstring1:string, cstring2:string, ctimestamp1:timestamp, ctimestamp2:timestamp, cboolean1:boolean, cboolean2:boolean
partitionColumnCount: 0
- scratchColumnTypeNames: [double]
+ scratchColumnTypeNames: [double, double, double, double]
Reduce Vectorization:
enabled: false
enableConditionsMet: hive.vectorized.execution.reduce.enabled IS true
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0), sum(VALUE._col1), var_pop(VALUE._col2), count(VALUE._col3), min(VALUE._col4), avg(VALUE._col5)
+ aggregations: sum(VALUE._col0), count(VALUE._col1), sum(VALUE._col2), sum(VALUE._col3), sum(VALUE._col4), count(VALUE._col5), count(VALUE._col6), min(VALUE._col7), sum(VALUE._col8), count(VALUE._col9)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
- Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9
+ Statistics: Num rows: 1 Data size: 76 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: _col0 (type: double), (_col0 % -563.0D) (type: double), (_col0 + 762.0D) (type: double), _col1 (type: double), _col2 (type: double), (- _col2) (type: double), (_col1 - _col0) (type: double), _col3 (type: bigint), (- (_col1 - _col0)) (type: double), (_col2 - 762.0D) (type: double), _col4 (type: tinyint), ((- _col2) + UDFToDouble(_col4)) (type: double), _col5 (type: double), (((- _col2) + UDFToDouble(_col4)) - _col1) (type: double)
+ expressions: (_col0 / _col1) (type: double), ((_col0 / _col1) % -563.0D) (type: double), ((_col0 / _col1) + 762.0D) (type: double), _col2 (type: double), ((_col3 - ((_col4 * _col4) / _col5)) / _col5) (type: double), (- ((_col3 - ((_col4 * _col4) / _col5)) / _col5)) (type: double), (_col2 - (_col0 / _col1)) (type: double), _col6 (type: bigint), (- (_col2 - (_col0 / _col1))) (type: double), (((_col3 - ((_col4 * _col4) / _col5)) / _col5) - 762.0D) (type: double), _col7 (type: tinyint), ((- ((_col3 - ((_col4 * _col4) / _col5)) / _col5)) + UDFToDouble(_col7)) (type: double), (_col8 / _col9) (type: double), (((- ((_col3 - ((_col4 * _col4) / _col5)) / _col5)) + UDFToDouble(_col7)) - _col2) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13
- Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 76 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 76 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.SequenceFileInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
@@ -190,4 +191,4 @@ WHERE (((ctimestamp1 < ctimestamp2)
POSTHOOK: type: QUERY
POSTHOOK: Input: default@alltypesparquet
#### A masked pattern was here ####
--5646.467075892857 -16.467075892856883 -4884.467075892857 -2839.634998679161 1.49936299222378778E18 -1.49936299222378778E18 2806.832077213696 3584 -2806.832077213696 1.49936299222378701E18 -64 -1.49936299222378778E18 -5650.1297631138395 -1.49936299222378496E18
+-5646.467075892857 -16.467075892856883 -4884.467075892857 -2839.634998679161 1.49936299222378906E18 -1.49936299222378906E18 2806.832077213696 3584 -2806.832077213696 1.49936299222378829E18 -64 -1.49936299222378906E18 -5650.1297631138395 -1.49936299222378624E18
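Two things to note in parquet_vectorization_2: var_pop(cbigint) is decomposed the same way (cbigint is first cast to double, hence the extra scratch double columns), and the low-order digits of the masked result row shift because the reassembled formula sums x and x*x directly instead of using the old struct-based variance accumulator, which rounds differently in double arithmetic. A sketch of the rewritten var_pop, reconstructed from the plan:

    -- var_pop divides by n with no CASE guard, unlike var_samp/stddev_samp:
    SELECT (sum(cast(cbigint AS double) * cast(cbigint AS double))
            - sum(cast(cbigint AS double)) * sum(cast(cbigint AS double)) / count(cbigint))
           / count(cbigint) AS var_pop_rewritten
    FROM alltypesparquet;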
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_3.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_3.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_3.q.out
index 5df9c54..f98dea6 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_3.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_3.q.out
@@ -75,25 +75,26 @@ STAGE PLANS:
predicate: (((UDFToDouble(cbigint) > cdouble) and (CAST( csmallint AS decimal(8,3)) >= 79.553) and (ctimestamp1 > ctimestamp2)) or ((UDFToFloat(cint) <= cfloat) and (CAST( cbigint AS decimal(22,3)) <> 79.553) and (UDFToDouble(ctimestamp2) = -29071.0D))) (type: boolean)
Statistics: Num rows: 2503 Data size: 30036 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: ctinyint (type: tinyint), csmallint (type: smallint), cint (type: int), cfloat (type: float)
- outputColumnNames: ctinyint, csmallint, cint, cfloat
+ expressions: csmallint (type: smallint), ctinyint (type: tinyint), cfloat (type: float), cint (type: int), UDFToDouble(csmallint) (type: double), (UDFToDouble(csmallint) * UDFToDouble(csmallint)) (type: double), UDFToDouble(ctinyint) (type: double), (UDFToDouble(ctinyint) * UDFToDouble(ctinyint)) (type: double), UDFToDouble(cfloat) (type: double), (UDFToDouble(cfloat) * UDFToDouble(cfloat)) (type: double), UDFToDouble(cint) (type: double), (UDFToDouble(cint) * UDFToDouble(cint)) (type: double)
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
Select Vectorization:
className: VectorSelectOperator
native: true
- projectedOutputColumnNums: [0, 1, 2, 4]
+ projectedOutputColumnNums: [1, 0, 4, 2, 13, 18, 16, 20, 4, 17, 19, 23]
+ selectExpressions: CastLongToDouble(col 1:smallint) -> 13:double, DoubleColMultiplyDoubleColumn(col 16:double, col 17:double)(children: CastLongToDouble(col 1:smallint) -> 16:double, CastLongToDouble(col 1:smallint) -> 17:double) -> 18:double, CastLongToDouble(col 0:tinyint) -> 16:double, DoubleColMultiplyDoubleColumn(col 17:double, col 19:double)(children: CastLongToDouble(col 0:tinyint) -> 17:double, CastLongToDouble(col 0:tinyint) -> 19:double) -> 20:double, DoubleColMultiplyDoubleColumn(col 4:double, col 4:double)(children: col 4:float, col 4:float) -> 17:double, CastLongToDouble(col 2:int) -> 19:double, DoubleColMultiplyDoubleColumn(col 21:double, col 22:double)(children: CastLongToDouble(col 2:int) -> 21:double, CastLongToDouble(col 2:int) -> 22:double) -> 23:double
Statistics: Num rows: 2503 Data size: 30036 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: stddev_samp(csmallint), stddev_pop(ctinyint), stddev_samp(cfloat), sum(cfloat), avg(cint), stddev_pop(cint)
+ aggregations: sum(_col5), sum(_col4), count(_col0), sum(_col7), sum(_col6), count(_col1), sum(_col9), sum(_col8), count(_col2), sum(_col2), sum(_col3), count(_col3), sum(_col11), sum(_col10)
Group By Vectorization:
- aggregators: VectorUDAFVarLong(col 1:smallint) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_samp, VectorUDAFVarLong(col 0:tinyint) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_pop, VectorUDAFVarDouble(col 4:float) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_samp, VectorUDAFSumDouble(col 4:float) -> double, VectorUDAFAvgLong(col 2:int) -> struct<count:bigint,sum:double,input:int>, VectorUDAFVarLong(col 2:int) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_pop
+ aggregators: VectorUDAFSumDouble(col 18:double) -> double, VectorUDAFSumDouble(col 13:double) -> double, VectorUDAFCount(col 1:smallint) -> bigint, VectorUDAFSumDouble(col 20:double) -> double, VectorUDAFSumDouble(col 16:double) -> double, VectorUDAFCount(col 0:tinyint) -> bigint, VectorUDAFSumDouble(col 17:double) -> double, VectorUDAFSumDouble(col 4:double) -> double, VectorUDAFCount(col 4:float) -> bigint, VectorUDAFSumDouble(col 4:float) -> double, VectorUDAFSumLong(col 2:int) -> bigint, VectorUDAFCount(col 2:int) -> bigint, VectorUDAFSumDouble(col 23:double) -> double, VectorUDAFSumDouble(col 19:double) -> double
className: VectorGroupByOperator
groupByMode: HASH
native: false
vectorProcessingMode: HASH
- projectedOutputColumnNums: [0, 1, 2, 3, 4, 5]
+ projectedOutputColumnNums: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
- Statistics: Num rows: 1 Data size: 404 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13
+ Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
Reduce Sink Vectorization:
@@ -101,8 +102,8 @@ STAGE PLANS:
native: false
nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
- Statistics: Num rows: 1 Data size: 404 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: struct<count:bigint,sum:double,variance:double>), _col1 (type: struct<count:bigint,sum:double,variance:double>), _col2 (type: struct<count:bigint,sum:double,variance:double>), _col3 (type: double), _col4 (type: struct<count:bigint,sum:double,input:int>), _col5 (type: struct<count:bigint,sum:double,variance:double>)
+ Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
+ value expressions: _col0 (type: double), _col1 (type: double), _col2 (type: bigint), _col3 (type: double), _col4 (type: double), _col5 (type: bigint), _col6 (type: double), _col7 (type: double), _col8 (type: bigint), _col9 (type: double), _col10 (type: bigint), _col11 (type: bigint), _col12 (type: double), _col13 (type: double)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -118,24 +119,24 @@ STAGE PLANS:
includeColumns: [0, 1, 2, 3, 4, 5, 8, 9]
dataColumns: ctinyint:tinyint, csmallint:smallint, cint:int, cbigint:bigint, cfloat:float, cdouble:double, cstring1:string, cstring2:string, ctimestamp1:timestamp, ctimestamp2:timestamp, cboolean1:boolean, cboolean2:boolean
partitionColumnCount: 0
- scratchColumnTypeNames: [double, decimal(22,3), decimal(8,3)]
+ scratchColumnTypeNames: [double, decimal(22,3), decimal(8,3), double, double, double, double, double, double, double, double]
Reduce Vectorization:
enabled: false
enableConditionsMet: hive.vectorized.execution.reduce.enabled IS true
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: stddev_samp(VALUE._col0), stddev_pop(VALUE._col1), stddev_samp(VALUE._col2), sum(VALUE._col3), avg(VALUE._col4), stddev_pop(VALUE._col5)
+ aggregations: sum(VALUE._col0), sum(VALUE._col1), count(VALUE._col2), sum(VALUE._col3), sum(VALUE._col4), count(VALUE._col5), sum(VALUE._col6), sum(VALUE._col7), count(VALUE._col8), sum(VALUE._col9), sum(VALUE._col10), count(VALUE._col11), sum(VALUE._col12), sum(VALUE._col13)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
- Statistics: Num rows: 1 Data size: 404 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13
+ Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: _col0 (type: double), (_col0 - 10.175D) (type: double), _col1 (type: double), (_col0 * (_col0 - 10.175D)) (type: double), (- _col1) (type: double), (_col0 % 79.553D) (type: double), (- (_col0 * (_col0 - 10.175D))) (type: double), _col2 (type: double), (- _col0) (type: double), _col3 (type: double), ((- (_col0 * (_col0 - 10.175D))) / (_col0 - 10.175D)) (type: double), (- (_col0 - 10.175D)) (type: double), _col4 (type: double), (-3728.0D - _col0) (type: double), _col5 (type: double), (_col4 / _col2) (type: double)
+ expressions: power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) (type: double), (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) - 10.175D) (type: double), power(((_col3 - ((_col4 * _col4) / _col5)) / _col5), 0.5) (type: double), (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) * (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) - 10.175D)) (type: double), (- power(((_col3 - ((_col4 * _col4) / _col5)) / _col5), 0.5)) (type: double), (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) % 79.553D) (type: double), (- (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) * (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) - 10.175D))) (type: double), power(((_col6 - ((_col7 * _col7) / _col8)) / CASE WHEN ((_col8 = 1L)) THEN (null) ELSE ((_col8 - 1)) END), 0.5) (type: double), (- power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5)) (type: double), _col9 (type: double), ((- (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) * (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) - 10.175D))) / (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) - 10.175D)) (type: double), (- (power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5) - 10.175D)) (type: double), (_col10 / _col11) (type: double), (-3728.0D - power(((_col0 - ((_col1 * _col1) / _col2)) / CASE WHEN ((_col2 = 1L)) THEN (null) ELSE ((_col2 - 1)) END), 0.5)) (type: double), power(((_col12 - ((_col13 * _col13) / _col11)) / _col11), 0.5) (type: double), ((_col10 / _col11) / power(((_col6 - ((_col7 * _col7) / _col8)) / CASE WHEN ((_col8 = 1L)) THEN (null) ELSE ((_col8 - 1)) END), 0.5)) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15
- Statistics: Num rows: 1 Data size: 404 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 1 Data size: 404 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.SequenceFileInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
@@ -200,4 +201,4 @@ WHERE (((cint <= cfloat)
POSTHOOK: type: QUERY
POSTHOOK: Input: default@alltypesparquet
#### A masked pattern was here ####
-0.0 -10.175 34.287285216637066 -0.0 -34.287285216637066 0.0 0.0 34.34690095515641 -0.0 197.89499950408936 -0.0 10.175 NULL -3728.0 NULL NULL
+0.0 -10.175 34.287285216637066 -0.0 -34.287285216637066 0.0 0.0 34.3469009551564 -0.0 197.89499950408936 -0.0 10.175 NULL -3728.0 NULL NULL
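parquet_vectorization_3 shows the sample and population forms side by side: in the long Select Operator expression above, stddev_samp divides by CASE WHEN n = 1 THEN null ELSE n - 1 END, while stddev_pop divides by n directly (the power(... / _col5, 0.5) terms). A compact contrast over a hypothetical single-column table t(x):

    SELECT power((sum(x * x) - sum(x) * sum(x) / count(x))
                 / CASE WHEN count(x) = 1 THEN null ELSE count(x) - 1 END, 0.5) AS stddev_samp_rewritten,
           power((sum(x * x) - sum(x) * sum(x) / count(x)) / count(x), 0.5)     AS stddev_pop_rewritten
    FROM t;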
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_4.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_4.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_4.q.out
index c295618..973e2bd 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_4.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_4.q.out
@@ -70,17 +70,18 @@ STAGE PLANS:
predicate: (((UDFToInteger(ctinyint) <= -89010) and (cdouble > 79.553D)) or ((cbigint <> -563L) and ((UDFToLong(ctinyint) <> cbigint) or (cdouble <= -3728.0D))) or (UDFToInteger(csmallint) >= cint)) (type: boolean)
Statistics: Num rows: 12288 Data size: 147456 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: ctinyint (type: tinyint), cint (type: int), cdouble (type: double)
- outputColumnNames: ctinyint, cint, cdouble
+ expressions: cint (type: int), cdouble (type: double), ctinyint (type: tinyint), (cdouble * cdouble) (type: double)
+ outputColumnNames: _col0, _col1, _col2, _col3
Select Vectorization:
className: VectorSelectOperator
native: true
- projectedOutputColumnNums: [0, 2, 5]
+ projectedOutputColumnNums: [2, 5, 0, 13]
+ selectExpressions: DoubleColMultiplyDoubleColumn(col 5:double, col 5:double) -> 13:double
Statistics: Num rows: 12288 Data size: 147456 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: sum(cint), stddev_pop(cdouble), avg(cdouble), var_pop(cdouble), min(ctinyint)
+ aggregations: sum(_col0), sum(_col3), sum(_col1), count(_col1), min(_col2)
Group By Vectorization:
- aggregators: VectorUDAFSumLong(col 2:int) -> bigint, VectorUDAFVarDouble(col 5:double) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_pop, VectorUDAFAvgDouble(col 5:double) -> struct<count:bigint,sum:double,input:double>, VectorUDAFVarDouble(col 5:double) -> struct<count:bigint,sum:double,variance:double> aggregation: var_pop, VectorUDAFMinLong(col 0:tinyint) -> tinyint
+ aggregators: VectorUDAFSumLong(col 2:int) -> bigint, VectorUDAFSumDouble(col 13:double) -> double, VectorUDAFSumDouble(col 5:double) -> double, VectorUDAFCount(col 5:double) -> bigint, VectorUDAFMinLong(col 0:tinyint) -> tinyint
className: VectorGroupByOperator
groupByMode: HASH
native: false
@@ -88,7 +89,7 @@ STAGE PLANS:
projectedOutputColumnNums: [0, 1, 2, 3, 4]
mode: hash
outputColumnNames: _col0, _col1, _col2, _col3, _col4
- Statistics: Num rows: 1 Data size: 252 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 36 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
Reduce Sink Vectorization:
@@ -96,8 +97,8 @@ STAGE PLANS:
native: false
nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
- Statistics: Num rows: 1 Data size: 252 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: bigint), _col1 (type: struct<count:bigint,sum:double,variance:double>), _col2 (type: struct<count:bigint,sum:double,input:double>), _col3 (type: struct<count:bigint,sum:double,variance:double>), _col4 (type: tinyint)
+ Statistics: Num rows: 1 Data size: 36 Basic stats: COMPLETE Column stats: NONE
+ value expressions: _col0 (type: bigint), _col1 (type: double), _col2 (type: double), _col3 (type: bigint), _col4 (type: tinyint)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -113,24 +114,24 @@ STAGE PLANS:
includeColumns: [0, 1, 2, 3, 5]
dataColumns: ctinyint:tinyint, csmallint:smallint, cint:int, cbigint:bigint, cfloat:float, cdouble:double, cstring1:string, cstring2:string, ctimestamp1:timestamp, ctimestamp2:timestamp, cboolean1:boolean, cboolean2:boolean
partitionColumnCount: 0
- scratchColumnTypeNames: []
+ scratchColumnTypeNames: [double]
Reduce Vectorization:
enabled: false
enableConditionsMet: hive.vectorized.execution.reduce.enabled IS true
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: sum(VALUE._col0), stddev_pop(VALUE._col1), avg(VALUE._col2), var_pop(VALUE._col3), min(VALUE._col4)
+ aggregations: sum(VALUE._col0), sum(VALUE._col1), sum(VALUE._col2), count(VALUE._col3), min(VALUE._col4)
mode: mergepartial
outputColumnNames: _col0, _col1, _col2, _col3, _col4
- Statistics: Num rows: 1 Data size: 252 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 36 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: _col0 (type: bigint), (_col0 * -563L) (type: bigint), (-3728L + _col0) (type: bigint), _col1 (type: double), (- _col1) (type: double), _col2 (type: double), ((_col0 * -563L) % _col0) (type: bigint), (UDFToDouble(((_col0 * -563L) % _col0)) / _col2) (type: double), _col3 (type: double), (- (UDFToDouble(((_col0 * -563L) % _col0)) / _col2)) (type: double), ((-3728L + _col0) - (_col0 * -563L)) (type: bigint), _col4 (type: tinyint), _col4 (type: tinyint), (UDFToDouble(_col4) * (- (UDFToDouble(((_col0 * -563L) % _col0)) / _col2))) (type: double)
+ expressions: _col0 (type: bigint), (_col0 * -563L) (type: bigint), (-3728L + _col0) (type: bigint), power(((_col1 - ((_col2 * _col2) / _col3)) / _col3), 0.5) (type: double), (- power(((_col1 - ((_col2 * _col2) / _col3)) / _col3), 0.5)) (type: double), (_col2 / _col3) (type: double), ((_col0 * -563L) % _col0) (type: bigint), (UDFToDouble(((_col0 * -563L) % _col0)) / (_col2 / _col3)) (type: double), ((_col1 - ((_col2 * _col2) / _col3)) / _col3) (type: double), (- (UDFToDouble(((_col0 * -563L) % _col0)) / (_col2 / _col3))) (type: double), ((-3728L + _col0) - (_col0 * -563L)) (type: bigint), _col4 (type: tinyint), _col4 (type: tinyint), (UDFToDouble(_col4) * (- (UDFToDouble(((_col0 * -563L) % _col0)) / (_col2 / _col3)))) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13
- Statistics: Num rows: 1 Data size: 252 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 36 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 1 Data size: 252 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 36 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.SequenceFileInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
@@ -190,4 +191,4 @@ WHERE (((csmallint >= cint)
POSTHOOK: type: QUERY
POSTHOOK: Input: default@alltypesparquet
#### A masked pattern was here ####
--493101012745 277615870175435 -493101016473 136727.7868296355 -136727.7868296355 2298.5515807767374 0 0.0 1.8694487691330246E10 -0.0 -278108971191908 -64 -64 0.0
+-493101012745 277615870175435 -493101016473 136727.78682963562 -136727.78682963562 2298.5515807767374 0 0.0 1.8694487691330276E10 -0.0 -278108971191908 -64 -64 0.0
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_9.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_9.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_9.q.out
index 398443b..cf06c91 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_9.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_9.q.out
@@ -64,38 +64,39 @@ STAGE PLANS:
predicate: (((cdouble >= -1.389D) or (cstring1 < 'a')) and (cstring2 like '%b%')) (type: boolean)
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: cdouble (type: double), cstring1 (type: string), ctimestamp1 (type: timestamp)
- outputColumnNames: cdouble, cstring1, ctimestamp1
+ expressions: cstring1 (type: string), cdouble (type: double), ctimestamp1 (type: timestamp), (cdouble * cdouble) (type: double)
+ outputColumnNames: _col0, _col1, _col2, _col3
Select Vectorization:
className: VectorSelectOperator
native: true
- projectedOutputColumnNums: [5, 6, 8]
+ projectedOutputColumnNums: [6, 5, 8, 13]
+ selectExpressions: DoubleColMultiplyDoubleColumn(col 5:double, col 5:double) -> 13:double
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: count(cdouble), stddev_samp(cdouble), min(cdouble)
+ aggregations: count(_col1), sum(_col3), sum(_col1), min(_col1)
Group By Vectorization:
- aggregators: VectorUDAFCount(col 5:double) -> bigint, VectorUDAFVarDouble(col 5:double) -> struct<count:bigint,sum:double,variance:double> aggregation: stddev_samp, VectorUDAFMinDouble(col 5:double) -> double
+ aggregators: VectorUDAFCount(col 5:double) -> bigint, VectorUDAFSumDouble(col 13:double) -> double, VectorUDAFSumDouble(col 5:double) -> double, VectorUDAFMinDouble(col 5:double) -> double
className: VectorGroupByOperator
groupByMode: HASH
- keyExpressions: col 5:double, col 6:string, col 8:timestamp
+ keyExpressions: col 6:string, col 5:double, col 8:timestamp
native: false
vectorProcessingMode: HASH
- projectedOutputColumnNums: [0, 1, 2]
- keys: cdouble (type: double), cstring1 (type: string), ctimestamp1 (type: timestamp)
+ projectedOutputColumnNums: [0, 1, 2, 3]
+ keys: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp)
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
- key expressions: _col0 (type: double), _col1 (type: string), _col2 (type: timestamp)
+ key expressions: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp)
sort order: +++
- Map-reduce partition columns: _col0 (type: double), _col1 (type: string), _col2 (type: timestamp)
+ Map-reduce partition columns: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp)
Reduce Sink Vectorization:
className: VectorReduceSinkOperator
native: false
nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true
nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col3 (type: bigint), _col4 (type: struct<count:bigint,sum:double,variance:double>), _col5 (type: double)
+ value expressions: _col3 (type: bigint), _col4 (type: double), _col5 (type: double), _col6 (type: double)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -111,20 +112,20 @@ STAGE PLANS:
includeColumns: [5, 6, 7, 8]
dataColumns: ctinyint:tinyint, csmallint:smallint, cint:int, cbigint:bigint, cfloat:float, cdouble:double, cstring1:string, cstring2:string, ctimestamp1:timestamp, ctimestamp2:timestamp, cboolean1:boolean, cboolean2:boolean
partitionColumnCount: 0
- scratchColumnTypeNames: []
+ scratchColumnTypeNames: [double]
Reduce Vectorization:
enabled: false
enableConditionsMet: hive.vectorized.execution.reduce.enabled IS true
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: count(VALUE._col0), stddev_samp(VALUE._col1), min(VALUE._col2)
- keys: KEY._col0 (type: double), KEY._col1 (type: string), KEY._col2 (type: timestamp)
+ aggregations: count(VALUE._col0), sum(VALUE._col1), sum(VALUE._col2), min(VALUE._col3)
+ keys: KEY._col0 (type: string), KEY._col1 (type: double), KEY._col2 (type: timestamp)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
Statistics: Num rows: 2048 Data size: 24576 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: _col1 (type: string), _col0 (type: double), _col2 (type: timestamp), (_col0 - 9763215.5639D) (type: double), (- (_col0 - 9763215.5639D)) (type: double), _col3 (type: bigint), _col4 (type: double), (- _col4) (type: double), (_col4 * UDFToDouble(_col3)) (type: double), _col5 (type: double), (9763215.5639D / _col0) (type: double), (CAST( _col3 AS decimal(19,0)) / -1.389) (type: decimal(28,6)), _col4 (type: double)
+ expressions: _col0 (type: string), _col1 (type: double), _col2 (type: timestamp), (_col1 - 9763215.5639D) (type: double), (- (_col1 - 9763215.5639D)) (type: double), _col3 (type: bigint), power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) (type: double), (- power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5)) (type: double), (power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) * UDFToDouble(_col3)) (type: double), _col6 (type: double), (9763215.5639D / _col1) (type: double), (CAST( _col3 AS decimal(19,0)) / -1.389) (type: decimal(28,6)), power(((_col4 - ((_col5 * _col5) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12
Statistics: Num rows: 2048 Data size: 24576 Basic stats: COMPLETE Column stats: NONE
File Output Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out
index b07cbba..59db9d1 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out
@@ -222,18 +222,18 @@ STAGE PLANS:
selectExpressions: DoubleColAddDoubleScalar(col 5:double, val 1.0) -> 13:double
Statistics: Num rows: 12288 Data size: 147456 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col1)
+ aggregations: sum(_col1), count(_col1)
Group By Vectorization:
- aggregators: VectorUDAFAvgDouble(col 13:double) -> struct<count:bigint,sum:double,input:double>
+ aggregators: VectorUDAFSumDouble(col 13:double) -> double, VectorUDAFCount(col 13:double) -> bigint
className: VectorGroupByOperator
groupByMode: HASH
keyExpressions: col 0:tinyint
native: false
vectorProcessingMode: HASH
- projectedOutputColumnNums: [0]
+ projectedOutputColumnNums: [0, 1]
keys: _col0 (type: tinyint)
mode: hash
- outputColumnNames: _col0, _col1
+ outputColumnNames: _col0, _col1, _col2
Statistics: Num rows: 12288 Data size: 147456 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
key expressions: _col0 (type: tinyint)
@@ -246,7 +246,7 @@ STAGE PLANS:
nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Statistics: Num rows: 12288 Data size: 147456 Basic stats: COMPLETE Column stats: NONE
TopN Hash Memory Usage: 0.3
- value expressions: _col1 (type: struct<count:bigint,sum:double,input:double>)
+ value expressions: _col1 (type: double), _col2 (type: bigint)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -269,21 +269,25 @@ STAGE PLANS:
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0)
+ aggregations: sum(VALUE._col0), count(VALUE._col1)
keys: KEY._col0 (type: tinyint)
mode: mergepartial
- outputColumnNames: _col0, _col1
+ outputColumnNames: _col0, _col1, _col2
Statistics: Num rows: 6144 Data size: 73728 Basic stats: COMPLETE Column stats: NONE
- Limit
- Number of rows: 20
- Statistics: Num rows: 20 Data size: 240 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
+ Select Operator
+ expressions: _col0 (type: tinyint), (_col1 / _col2) (type: double)
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 6144 Data size: 73728 Basic stats: COMPLETE Column stats: NONE
+ Limit
+ Number of rows: 20
Statistics: Num rows: 20 Data size: 240 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 20 Data size: 240 Basic stats: COMPLETE Column stats: NONE
+ table:
+ input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
Stage: Stage-0
Fetch Operator
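In parquet_vectorization_limit the rewrite changes the plan shape: the merged Group By now emits the raw sum and count, and a new Select Operator computing (_col1 / _col2) is inserted between it and the Limit, which is why the File Output Operator subtree is re-indented above. A sketch of the query as planned, reconstructed from the operators (the ORDER BY is inferred from the TopN/sort-order entries, so treat it as an assumption):

    SELECT ctinyint, avg(cdouble + 1.0) FROM alltypesparquet
    GROUP BY ctinyint ORDER BY ctinyint LIMIT 20;
    -- now planned as:
    SELECT ctinyint, sum(cdouble + 1.0) / count(cdouble + 1.0) FROM alltypesparquet
    GROUP BY ctinyint ORDER BY ctinyint LIMIT 20;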
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_not.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_not.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_not.q.out
index e581007..e8fa9dd 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_not.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_not.q.out
@@ -55,4 +55,4 @@ WHERE (((cstring2 LIKE '%b%')
POSTHOOK: type: QUERY
POSTHOOK: Input: default@alltypesparquet
#### A masked pattern was here ####
--3.875652215945533E8 3.875652215945533E8 -3.875716535945533E8 1.436387455459401E9 3.875716535945533E8 0.0 2.06347151720204902E18 3.875716535945533E8 3.875652215945533E8 3.875716535945533E8 1.0 10934 -37224.52399241924 1.0517370547117279E9 -2.06347151720204902E18 1.5020929380914048E17 -64 64
+-3.875652215945533E8 3.875652215945533E8 -3.875716535945533E8 1.4363874554593508E9 3.875716535945533E8 0.0 2.06347151720190515E18 3.875716535945533E8 3.875652215945533E8 3.875716535945533E8 1.0 10934 -37224.52399241924 1.051665108770714E9 -2.06347151720190515E18 1.5020929380914048E17 -64 64
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/parquet_vectorization_pushdown.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/parquet_vectorization_pushdown.q.out b/ql/src/test/results/clientpositive/parquet_vectorization_pushdown.q.out
index 79024e3..b29ca9a 100644
--- a/ql/src/test/results/clientpositive/parquet_vectorization_pushdown.q.out
+++ b/ql/src/test/results/clientpositive/parquet_vectorization_pushdown.q.out
@@ -27,14 +27,14 @@ STAGE PLANS:
outputColumnNames: cbigint
Statistics: Num rows: 4096 Data size: 49152 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(cbigint)
+ aggregations: sum(cbigint), count(cbigint)
mode: hash
- outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 80 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
- Statistics: Num rows: 1 Data size: 80 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: struct<count:bigint,sum:double,input:bigint>)
+ Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
+ value expressions: _col0 (type: bigint), _col1 (type: bigint)
Execution mode: vectorized
Map Vectorization:
enabled: true
@@ -51,17 +51,21 @@ STAGE PLANS:
enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0)
+ aggregations: sum(VALUE._col0), count(VALUE._col1)
mode: mergepartial
- outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 80 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 1 Data size: 80 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: (_col0 / _col1) (type: double)
+ outputColumnNames: _col0
+ Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 1 Data size: 16 Basic stats: COMPLETE Column stats: NONE
+ table:
+ input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
Stage: Stage-0
Fetch Operator
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query1.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query1.q.out b/ql/src/test/results/clientpositive/perf/spark/query1.q.out
index 58a833b..c15a6f6 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query1.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query1.q.out
@@ -210,13 +210,13 @@ STAGE PLANS:
outputColumnNames: _col1, _col2
Statistics: Num rows: 31675133 Data size: 2454207210 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col2)
+ aggregations: sum(_col2), count(_col2)
keys: _col1 (type: int)
mode: complete
- outputColumnNames: _col0, _col1
+ outputColumnNames: _col0, _col1, _col2
Statistics: Num rows: 15837566 Data size: 1227103566 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: (_col1 * 1.2) (type: decimal(24,7)), true (type: boolean), _col0 (type: int)
+ expressions: ((_col1 / _col2) * 1.2) (type: decimal(38,11)), true (type: boolean), _col0 (type: int)
outputColumnNames: _col0, _col1, _col2
Statistics: Num rows: 15837566 Data size: 1227103566 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
@@ -224,7 +224,7 @@ STAGE PLANS:
sort order: +
Map-reduce partition columns: _col2 (type: int)
Statistics: Num rows: 15837566 Data size: 1227103566 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: decimal(24,7)), _col1 (type: boolean)
+ value expressions: _col0 (type: decimal(38,11)), _col1 (type: boolean)
Reducer 2
Reduce Operator Tree:
Join Operator
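query1's only visible change is typing: once the division is explicit, the sum over a decimal(7,2) column is decimal(17,2), dividing it by the bigint count yields decimal(37,22), and multiplying by 1.2 lands on decimal(38,11) instead of avg's old decimal(24,7). A sketch of the rewritten threshold (ctr_total_return, ctr_store_sk, and the customer_total_return CTE are the TPC-DS query1 names, an assumption from the query text rather than this diff):

    -- was:  avg(ctr_total_return) * 1.2                              -> decimal(24,7)
    -- now:  (sum(ctr_total_return) / count(ctr_total_return)) * 1.2  -> decimal(38,11)
    SELECT (sum(ctr_total_return) / count(ctr_total_return)) * 1.2
    FROM customer_total_return
    GROUP BY ctr_store_sk;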
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query13.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query13.q.out b/ql/src/test/results/clientpositive/perf/spark/query13.q.out
index f4996dd..92d9370 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query13.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query13.q.out
@@ -308,28 +308,32 @@ STAGE PLANS:
outputColumnNames: _col6, _col8, _col9
Statistics: Num rows: 715776 Data size: 63145968 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col6), avg(_col8), avg(_col9), sum(_col9)
+ aggregations: sum(_col6), count(_col6), sum(_col8), count(_col8), sum(_col9), count(_col9)
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3
- Statistics: Num rows: 1 Data size: 764 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
- Statistics: Num rows: 1 Data size: 764 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: struct<count:bigint,sum:double,input:int>), _col1 (type: struct<count:bigint,sum:decimal(17,2),input:decimal(7,2)>), _col2 (type: struct<count:bigint,sum:decimal(17,2),input:decimal(7,2)>), _col3 (type: decimal(17,2))
+ Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ value expressions: _col0 (type: bigint), _col1 (type: bigint), _col2 (type: decimal(17,2)), _col3 (type: bigint), _col4 (type: decimal(17,2)), _col5 (type: bigint)
Reducer 6
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0), avg(VALUE._col1), avg(VALUE._col2), sum(VALUE._col3)
+ aggregations: sum(VALUE._col0), count(VALUE._col1), sum(VALUE._col2), count(VALUE._col3), sum(VALUE._col4), count(VALUE._col5)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3
- Statistics: Num rows: 1 Data size: 764 Basic stats: COMPLETE Column stats: NONE
- File Output Operator
- compressed: false
- Statistics: Num rows: 1 Data size: 764 Basic stats: COMPLETE Column stats: NONE
- table:
- input format: org.apache.hadoop.mapred.SequenceFileInputFormat
- output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
- serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ Select Operator
+ expressions: (_col0 / _col1) (type: double), (_col2 / _col3) (type: decimal(37,22)), (_col4 / _col5) (type: decimal(37,22)), CAST( _col4 AS decimal(17,2)) (type: decimal(17,2))
+ outputColumnNames: _col0, _col1, _col2, _col3
+ Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ File Output Operator
+ compressed: false
+ Statistics: Num rows: 1 Data size: 256 Basic stats: COMPLETE Column stats: NONE
+ table:
+ input format: org.apache.hadoop.mapred.SequenceFileInputFormat
+ output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
+ serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
Stage: Stage-0
Fetch Operator
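query13 shows how the result types fall out per input type: avg over an int column becomes a bigint/bigint division typed double, avg over decimal(7,2) columns becomes decimal(37,22), and the separate sum aggregate is recovered from the shared partial sum with a cast back to decimal(17,2) (the CAST( _col4 AS decimal(17,2)) above). A sketch with the TPC-DS query13 column names, an assumption from the query text:

    SELECT sum(ss_quantity) / count(ss_quantity)                     AS avg_qty,    -- double
           sum(ss_ext_sales_price) / count(ss_ext_sales_price)       AS avg_price,  -- decimal(37,22)
           sum(ss_ext_wholesale_cost) / count(ss_ext_wholesale_cost) AS avg_cost,   -- decimal(37,22)
           cast(sum(ss_ext_wholesale_cost) AS decimal(17,2))         AS sum_cost
    FROM store_sales;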
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query17.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query17.q.out b/ql/src/test/results/clientpositive/perf/spark/query17.q.out
index 7b12a39..4a7fd45 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query17.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query17.q.out
@@ -353,29 +353,33 @@ STAGE PLANS:
input vertices:
1 Map 16
Statistics: Num rows: 843315281 Data size: 74397518956 Basic stats: COMPLETE Column stats: NONE
- Group By Operator
- aggregations: count(_col5), avg(_col5), stddev_samp(_col5), count(_col21), avg(_col21), stddev_samp(_col21), count(_col14), avg(_col14), stddev_samp(_col14)
- keys: _col9 (type: string), _col10 (type: string), _col25 (type: string)
- mode: hash
+ Select Operator
+ expressions: _col9 (type: string), _col10 (type: string), _col25 (type: string), _col5 (type: int), _col21 (type: int), _col14 (type: int), UDFToDouble(_col5) (type: double), (UDFToDouble(_col5) * UDFToDouble(_col5)) (type: double), UDFToDouble(_col21) (type: double), (UDFToDouble(_col21) * UDFToDouble(_col21)) (type: double), UDFToDouble(_col14) (type: double), (UDFToDouble(_col14) * UDFToDouble(_col14)) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
Statistics: Num rows: 843315281 Data size: 74397518956 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
- sort order: +++
- Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)
+ Group By Operator
+ aggregations: count(_col3), sum(_col3), sum(_col7), sum(_col6), count(_col4), sum(_col4), sum(_col9), sum(_col8), count(_col5), sum(_col5), sum(_col11), sum(_col10)
+ keys: _col0 (type: string), _col1 (type: string), _col2 (type: string)
+ mode: hash
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14
Statistics: Num rows: 843315281 Data size: 74397518956 Basic stats: COMPLETE Column stats: NONE
- TopN Hash Memory Usage: 0.1
- value expressions: _col3 (type: bigint), _col4 (type: struct<count:bigint,sum:double,input:int>), _col5 (type: struct<count:bigint,sum:double,variance:double>), _col6 (type: bigint), _col7 (type: struct<count:bigint,sum:double,input:int>), _col8 (type: struct<count:bigint,sum:double,variance:double>), _col9 (type: bigint), _col10 (type: struct<count:bigint,sum:double,input:int>), _col11 (type: struct<count:bigint,sum:double,variance:double>)
+ Reduce Output Operator
+ key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string)
+ sort order: +++
+ Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string)
+ Statistics: Num rows: 843315281 Data size: 74397518956 Basic stats: COMPLETE Column stats: NONE
+ TopN Hash Memory Usage: 0.1
+ value expressions: _col3 (type: bigint), _col4 (type: bigint), _col5 (type: double), _col6 (type: double), _col7 (type: bigint), _col8 (type: bigint), _col9 (type: double), _col10 (type: double), _col11 (type: bigint), _col12 (type: bigint), _col13 (type: double), _col14 (type: double)
Reducer 5
Reduce Operator Tree:
Group By Operator
- aggregations: count(VALUE._col0), avg(VALUE._col1), stddev_samp(VALUE._col2), count(VALUE._col3), avg(VALUE._col4), stddev_samp(VALUE._col5), count(VALUE._col6), avg(VALUE._col7), stddev_samp(VALUE._col8)
+ aggregations: count(VALUE._col0), sum(VALUE._col1), sum(VALUE._col2), sum(VALUE._col3), count(VALUE._col4), sum(VALUE._col5), sum(VALUE._col6), sum(VALUE._col7), count(VALUE._col8), sum(VALUE._col9), sum(VALUE._col10), sum(VALUE._col11)
keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14
Statistics: Num rows: 421657640 Data size: 37198759433 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint), _col4 (type: double), _col5 (type: double), (_col5 / _col4) (type: double), _col6 (type: bigint), _col7 (type: double), _col8 (type: double), (_col8 / _col7) (type: double), _col9 (type: bigint), _col10 (type: double), (_col11 / _col10) (type: double)
+ expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: bigint), (_col4 / _col3) (type: double), power(((_col5 - ((_col6 * _col6) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) (type: double), (power(((_col5 - ((_col6 * _col6) / _col3)) / CASE WHEN ((_col3 = 1L)) THEN (null) ELSE ((_col3 - 1)) END), 0.5) / (_col4 / _col3)) (type: double), _col7 (type: bigint), (_col8 / _col7) (type: double), power(((_col9 - ((_col10 * _col10) / _col7)) / CASE WHEN ((_col7 = 1L)) THEN (null) ELSE ((_col7 - 1)) END), 0.5) (type: double), (power(((_col9 - ((_col10 * _col10) / _col7)) / CASE WHEN ((_col7 = 1L)) THEN (null) ELSE ((_col7 - 1)) END), 0.5) / (_col8 / _col7)) (type: double), _col11 (type: bigint), (_col12 / _col11) (type: double), (power(((_col13 - ((_col14 * _col14) / _col11)) / CASE WHEN ((_col11 = 1L)) THEN (null) ELSE ((_col11 - 1)) END), 0.5) / (_col12 / _col11)) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13
Statistics: Num rows: 421657640 Data size: 37198759433 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
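A note on the query17 hunk above: AggregateReduceFunctionsRule splits stddev_samp into
the simple partials count(x), sum(x) and sum(x*x), so only scalar sums and counts cross
the shuffle instead of the old struct<count:bigint,sum:double,variance:double>
intermediate, and the final Select Operator rebuilds the statistic. A minimal HiveQL
sketch of the equivalence, where ss_quantity stands in for one of the three quantity
columns this query aggregates and the surrounding joins are elided:

    -- before: one composite aggregate
    SELECT stddev_samp(ss_quantity) FROM store_sales;

    -- after: simple partials, recombined on top exactly as in the plan's
    -- power(((sum_sq - s*s/n) / CASE WHEN n=1 THEN null ELSE n-1 END), 0.5)
    SELECT power((sum_sq - (s * s) / n)
                 / CASE WHEN (n = 1) THEN null ELSE (n - 1) END, 0.5)
    FROM (SELECT count(ss_quantity) AS n,
                 sum(CAST(ss_quantity AS double)) AS s,
                 sum(CAST(ss_quantity AS double) * CAST(ss_quantity AS double)) AS sum_sq
          FROM store_sales) t;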
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query18.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query18.q.out b/ql/src/test/results/clientpositive/perf/spark/query18.q.out
index 0da17da..cb3c114 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query18.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query18.q.out
@@ -305,28 +305,28 @@ STAGE PLANS:
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10
Statistics: Num rows: 421645953 Data size: 57099332415 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col4), avg(_col5), avg(_col6), avg(_col7), avg(_col8), avg(_col9), avg(_col10)
+ aggregations: sum(_col4), count(_col4), sum(_col5), count(_col5), sum(_col6), count(_col6), sum(_col7), count(_col7), sum(_col8), count(_col8), sum(_col9), count(_col9), sum(_col10), count(_col10)
keys: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), 0L (type: bigint)
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18
Statistics: Num rows: 2108229765 Data size: 285496662075 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: bigint)
sort order: +++++
Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: bigint)
Statistics: Num rows: 2108229765 Data size: 285496662075 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col5 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>), _col6 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>), _col7 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>), _col8 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>), _col9 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>), _col10 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>), _col11 (type: struct<count:bigint,sum:decimal(22,2),input:decimal(12,2)>)
+ value expressions: _col5 (type: decimal(22,2)), _col6 (type: bigint), _col7 (type: decimal(22,2)), _col8 (type: bigint), _col9 (type: decimal(22,2)), _col10 (type: bigint), _col11 (type: decimal(22,2)), _col12 (type: bigint), _col13 (type: decimal(22,2)), _col14 (type: bigint), _col15 (type: decimal(22,2)), _col16 (type: bigint), _col17 (type: decimal(22,2)), _col18 (type: bigint)
Reducer 5
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0), avg(VALUE._col1), avg(VALUE._col2), avg(VALUE._col3), avg(VALUE._col4), avg(VALUE._col5), avg(VALUE._col6)
+ aggregations: sum(VALUE._col0), count(VALUE._col1), sum(VALUE._col2), count(VALUE._col3), sum(VALUE._col4), count(VALUE._col5), sum(VALUE._col6), count(VALUE._col7), sum(VALUE._col8), count(VALUE._col9), sum(VALUE._col10), count(VALUE._col11), sum(VALUE._col12), count(VALUE._col13)
keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string), KEY._col3 (type: string), KEY._col4 (type: bigint)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col5, _col6, _col7, _col8, _col9, _col10, _col11
+ outputColumnNames: _col0, _col1, _col2, _col3, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18
Statistics: Num rows: 1054114882 Data size: 142748330969 Basic stats: COMPLETE Column stats: NONE
pruneGroupingSetId: true
Select Operator
- expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col5 (type: decimal(16,6)), _col6 (type: decimal(16,6)), _col7 (type: decimal(16,6)), _col8 (type: decimal(16,6)), _col9 (type: decimal(16,6)), _col10 (type: decimal(16,6)), _col11 (type: decimal(16,6))
+ expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), (_col5 / _col6) (type: decimal(38,18)), (_col7 / _col8) (type: decimal(38,18)), (_col9 / _col10) (type: decimal(38,18)), (_col11 / _col12) (type: decimal(38,18)), (_col13 / _col14) (type: decimal(38,18)), (_col15 / _col16) (type: decimal(38,18)), (_col17 / _col18) (type: decimal(38,18))
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10
Statistics: Num rows: 1054114882 Data size: 142748330969 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
@@ -334,11 +334,11 @@ STAGE PLANS:
sort order: ++++
Statistics: Num rows: 1054114882 Data size: 142748330969 Basic stats: COMPLETE Column stats: NONE
TopN Hash Memory Usage: 0.1
- value expressions: _col4 (type: decimal(16,6)), _col5 (type: decimal(16,6)), _col6 (type: decimal(16,6)), _col7 (type: decimal(16,6)), _col8 (type: decimal(16,6)), _col9 (type: decimal(16,6)), _col10 (type: decimal(16,6))
+ value expressions: _col4 (type: decimal(38,18)), _col5 (type: decimal(38,18)), _col6 (type: decimal(38,18)), _col7 (type: decimal(38,18)), _col8 (type: decimal(38,18)), _col9 (type: decimal(38,18)), _col10 (type: decimal(38,18))
Reducer 6
Reduce Operator Tree:
Select Operator
- expressions: KEY.reducesinkkey3 (type: string), KEY.reducesinkkey0 (type: string), KEY.reducesinkkey1 (type: string), KEY.reducesinkkey2 (type: string), VALUE._col0 (type: decimal(16,6)), VALUE._col1 (type: decimal(16,6)), VALUE._col2 (type: decimal(16,6)), VALUE._col3 (type: decimal(16,6)), VALUE._col4 (type: decimal(16,6)), VALUE._col5 (type: decimal(16,6)), VALUE._col6 (type: decimal(16,6))
+ expressions: KEY.reducesinkkey3 (type: string), KEY.reducesinkkey0 (type: string), KEY.reducesinkkey1 (type: string), KEY.reducesinkkey2 (type: string), VALUE._col0 (type: decimal(38,18)), VALUE._col1 (type: decimal(38,18)), VALUE._col2 (type: decimal(38,18)), VALUE._col3 (type: decimal(38,18)), VALUE._col4 (type: decimal(38,18)), VALUE._col5 (type: decimal(38,18)), VALUE._col6 (type: decimal(38,18))
outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10
Statistics: Num rows: 1054114882 Data size: 142748330969 Basic stats: COMPLETE Column stats: NONE
Limit
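In the query18 hunk, the same rewrite turns each of the seven avg aggregates under the
grouping set into a sum/count pair, so fourteen scalar partials replace seven
struct<count,sum,input> values in the shuffle, and the averages are materialized as
divisions over the merged partials. Note the result-type change: decimal(22,2) divided
by bigint is typed decimal(38,18) in Hive, where the avg UDAF returned decimal(16,6).
A hedged sketch, with a decimal(12,2)-cast cs_quantity standing in for one of the
averaged columns and the real grouping columns elided:

    -- avg(c) ... GROUP BY k  becomes  sum(c) / count(c) ... GROUP BY k
    SELECT i_item_id,
           sum(CAST(cs_quantity AS decimal(12,2)))
             / count(CAST(cs_quantity AS decimal(12,2))) AS agg1
    FROM catalog_sales JOIN item ON (cs_item_sk = i_item_sk)
    GROUP BY i_item_id;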
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query22.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query22.q.out b/ql/src/test/results/clientpositive/perf/spark/query22.q.out
index 0353312..1837397 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query22.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query22.q.out
@@ -164,28 +164,28 @@ STAGE PLANS:
outputColumnNames: _col3, _col8, _col9, _col10, _col11
Statistics: Num rows: 50024305 Data size: 790375939 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col3)
+ aggregations: sum(_col3), count(_col3)
keys: _col8 (type: string), _col9 (type: string), _col10 (type: string), _col11 (type: string), 0L (type: bigint)
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6
Statistics: Num rows: 250121525 Data size: 3951879695 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
key expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: bigint)
sort order: +++++
Map-reduce partition columns: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: string), _col4 (type: bigint)
Statistics: Num rows: 250121525 Data size: 3951879695 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col5 (type: struct<count:bigint,sum:double,input:int>)
+ value expressions: _col5 (type: bigint), _col6 (type: bigint)
Reducer 3
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0)
+ aggregations: sum(VALUE._col0), count(VALUE._col1)
keys: KEY._col0 (type: string), KEY._col1 (type: string), KEY._col2 (type: string), KEY._col3 (type: string), KEY._col4 (type: bigint)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col5
+ outputColumnNames: _col0, _col1, _col2, _col3, _col5, _col6
Statistics: Num rows: 125060762 Data size: 1975939839 Basic stats: COMPLETE Column stats: NONE
pruneGroupingSetId: true
Select Operator
- expressions: _col3 (type: string), _col0 (type: string), _col1 (type: string), _col2 (type: string), _col5 (type: double)
+ expressions: _col3 (type: string), _col0 (type: string), _col1 (type: string), _col2 (type: string), (_col5 / _col6) (type: double)
outputColumnNames: _col0, _col1, _col2, _col3, _col4
Statistics: Num rows: 125060762 Data size: 1975939839 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
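The query22 hunk shows the same split for a single avg under a rollup: the 0L key
column in the plan is the grouping-set id, and after the rewrite each group ships two
bigint partials (sum and count of the int column) instead of a struct<count,sum,input>.
Since bigint-by-bigint division yields double in Hive, the final (_col5 / _col6) keeps
the type the old avg produced. A sketch, assuming the TPC-DS inventory schema with
joins elided:

    SELECT i_product_name, i_brand, i_class, i_category,
           avg(inv_quantity_on_hand) AS qoh   -- rewritten to sum(...) / count(...)
    FROM inventory JOIN item ON (inv_item_sk = i_item_sk)
    GROUP BY i_product_name, i_brand, i_class, i_category WITH ROLLUP;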
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query24.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query24.q.out b/ql/src/test/results/clientpositive/perf/spark/query24.q.out
index 54c607a..1f291c0 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query24.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query24.q.out
@@ -323,27 +323,27 @@ STAGE PLANS:
outputColumnNames: _col10
Statistics: Num rows: 463823414 Data size: 40918636268 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col10)
+ aggregations: sum(_col10), count(_col10)
mode: hash
- outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 400 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 1 Data size: 232 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
sort order:
- Statistics: Num rows: 1 Data size: 400 Basic stats: COMPLETE Column stats: NONE
- value expressions: _col0 (type: struct<count:bigint,sum:decimal(27,2),input:decimal(17,2)>)
+ Statistics: Num rows: 1 Data size: 232 Basic stats: COMPLETE Column stats: NONE
+ value expressions: _col0 (type: decimal(27,2)), _col1 (type: bigint)
Reducer 18
Local Work:
Map Reduce Local Work
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0)
+ aggregations: sum(VALUE._col0), count(VALUE._col1)
mode: mergepartial
- outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 400 Basic stats: COMPLETE Column stats: NONE
+ outputColumnNames: _col0, _col1
+ Statistics: Num rows: 1 Data size: 232 Basic stats: COMPLETE Column stats: NONE
Select Operator
- expressions: (0.05 * _col0) (type: decimal(24,8))
+ expressions: (0.05 * (_col0 / _col1)) (type: decimal(38,12))
outputColumnNames: _col0
- Statistics: Num rows: 1 Data size: 400 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 1 Data size: 232 Basic stats: COMPLETE Column stats: NONE
Spark HashTable Sink Operator
keys:
0
@@ -586,17 +586,17 @@ STAGE PLANS:
outputColumnNames: _col0, _col1, _col2, _col3, _col4
input vertices:
1 Reducer 18
- Statistics: Num rows: 231911707 Data size: 113455912641 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 231911707 Data size: 74494745865 Basic stats: COMPLETE Column stats: NONE
Filter Operator
predicate: (_col3 > _col4) (type: boolean)
- Statistics: Num rows: 77303902 Data size: 37818637383 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 77303902 Data size: 24831581847 Basic stats: COMPLETE Column stats: NONE
Select Operator
expressions: _col0 (type: string), _col1 (type: string), _col2 (type: string), _col3 (type: decimal(27,2))
outputColumnNames: _col0, _col1, _col2, _col3
- Statistics: Num rows: 77303902 Data size: 37818637383 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 77303902 Data size: 24831581847 Basic stats: COMPLETE Column stats: NONE
File Output Operator
compressed: false
- Statistics: Num rows: 77303902 Data size: 37818637383 Basic stats: COMPLETE Column stats: NONE
+ Statistics: Num rows: 77303902 Data size: 24831581847 Basic stats: COMPLETE Column stats: NONE
table:
input format: org.apache.hadoop.mapred.SequenceFileInputFormat
output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
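In query24 the rewritten avg feeds a scalar threshold: (0.05 * _col0) over the avg
output becomes (0.05 * (_col0 / _col1)) over a sum and a count, typed decimal(38,12)
instead of decimal(24,8). The single-row intermediate now carries a decimal(27,2) sum
plus a bigint count rather than the 400-byte avg struct, which is why the estimated
Data size drops to 232 bytes and the downstream join and filter estimates shrink with
it. A rough sketch of the predicate shape, assuming the query compares a per-group
total against 5% of the overall average (the netpaid/ssales names follow the TPC-DS
query text and are not shown in this hunk):

    -- HAVING paid > (SELECT 0.05 * avg(netpaid) FROM ssales)
    -- is planned after the rewrite as a division under the multiply:
    SELECT 0.05 * (sum(netpaid) / count(netpaid)) FROM ssales;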
http://git-wip-us.apache.org/repos/asf/hive/blob/5cb8867b/ql/src/test/results/clientpositive/perf/spark/query26.q.out
----------------------------------------------------------------------
diff --git a/ql/src/test/results/clientpositive/perf/spark/query26.q.out b/ql/src/test/results/clientpositive/perf/spark/query26.q.out
index b0f64e1..e9e54e4 100644
--- a/ql/src/test/results/clientpositive/perf/spark/query26.q.out
+++ b/ql/src/test/results/clientpositive/perf/spark/query26.q.out
@@ -202,10 +202,10 @@ STAGE PLANS:
outputColumnNames: _col4, _col5, _col6, _col7, _col18
Statistics: Num rows: 421645953 Data size: 57099332415 Basic stats: COMPLETE Column stats: NONE
Group By Operator
- aggregations: avg(_col4), avg(_col5), avg(_col7), avg(_col6)
+ aggregations: sum(_col4), count(_col4), sum(_col5), count(_col5), sum(_col7), count(_col7), sum(_col6), count(_col6)
keys: _col18 (type: string)
mode: hash
- outputColumnNames: _col0, _col1, _col2, _col3, _col4
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8
Statistics: Num rows: 421645953 Data size: 57099332415 Basic stats: COMPLETE Column stats: NONE
Reduce Output Operator
key expressions: _col0 (type: string)
@@ -213,25 +213,29 @@ STAGE PLANS:
Map-reduce partition columns: _col0 (type: string)
Statistics: Num rows: 421645953 Data size: 57099332415 Basic stats: COMPLETE Column stats: NONE
TopN Hash Memory Usage: 0.1
- value expressions: _col1 (type: struct<count:bigint,sum:double,input:int>), _col2 (type: struct<count:bigint,sum:decimal(17,2),input:decimal(7,2)>), _col3 (type: struct<count:bigint,sum:decimal(17,2),input:decimal(7,2)>), _col4 (type: struct<count:bigint,sum:decimal(17,2),input:decimal(7,2)>)
+ value expressions: _col1 (type: bigint), _col2 (type: bigint), _col3 (type: decimal(17,2)), _col4 (type: bigint), _col5 (type: decimal(17,2)), _col6 (type: bigint), _col7 (type: decimal(17,2)), _col8 (type: bigint)
Reducer 5
Reduce Operator Tree:
Group By Operator
- aggregations: avg(VALUE._col0), avg(VALUE._col1), avg(VALUE._col2), avg(VALUE._col3)
+ aggregations: sum(VALUE._col0), count(VALUE._col1), sum(VALUE._col2), count(VALUE._col3), sum(VALUE._col4), count(VALUE._col5), sum(VALUE._col6), count(VALUE._col7)
keys: KEY._col0 (type: string)
mode: mergepartial
- outputColumnNames: _col0, _col1, _col2, _col3, _col4
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8
Statistics: Num rows: 210822976 Data size: 28549666139 Basic stats: COMPLETE Column stats: NONE
- Reduce Output Operator
- key expressions: _col0 (type: string)
- sort order: +
+ Select Operator
+ expressions: _col0 (type: string), (_col1 / _col2) (type: double), (_col3 / _col4) (type: decimal(37,22)), (_col5 / _col6) (type: decimal(37,22)), (_col7 / _col8) (type: decimal(37,22))
+ outputColumnNames: _col0, _col1, _col2, _col3, _col4
Statistics: Num rows: 210822976 Data size: 28549666139 Basic stats: COMPLETE Column stats: NONE
- TopN Hash Memory Usage: 0.1
- value expressions: _col1 (type: double), _col2 (type: decimal(11,6)), _col3 (type: decimal(11,6)), _col4 (type: decimal(11,6))
+ Reduce Output Operator
+ key expressions: _col0 (type: string)
+ sort order: +
+ Statistics: Num rows: 210822976 Data size: 28549666139 Basic stats: COMPLETE Column stats: NONE
+ TopN Hash Memory Usage: 0.1
+ value expressions: _col1 (type: double), _col2 (type: decimal(37,22)), _col3 (type: decimal(37,22)), _col4 (type: decimal(37,22))
Reducer 6
Reduce Operator Tree:
Select Operator
- expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: double), VALUE._col1 (type: decimal(11,6)), VALUE._col2 (type: decimal(11,6)), VALUE._col3 (type: decimal(11,6))
+ expressions: KEY.reducesinkkey0 (type: string), VALUE._col0 (type: double), VALUE._col1 (type: decimal(37,22)), VALUE._col2 (type: decimal(37,22)), VALUE._col3 (type: decimal(37,22))
outputColumnNames: _col0, _col1, _col2, _col3, _col4
Statistics: Num rows: 210822976 Data size: 28549666139 Basic stats: COMPLETE Column stats: NONE
Limit
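Finally, query26 averages one int column and three decimal(7,2) columns; the rewrite
adds a Select Operator on the reducer to perform the four divisions before the TopN
sink, and the decimal averages widen from decimal(11,6) to decimal(37,22) because
decimal(17,2) divided by bigint takes Hive's generic decimal-division typing.
Approximately, with column names assumed from the TPC-DS catalog_sales schema:

    SELECT i_item_id,
           sum(cs_quantity)    / count(cs_quantity)    AS agg1,  -- bigint/bigint -> double
           sum(cs_list_price)  / count(cs_list_price)  AS agg2,  -- decimal(17,2)/bigint -> decimal(37,22)
           sum(cs_coupon_amt)  / count(cs_coupon_amt)  AS agg3,
           sum(cs_sales_price) / count(cs_sales_price) AS agg4
    FROM catalog_sales JOIN item ON (cs_item_sk = i_item_sk)
    GROUP BY i_item_id
    ORDER BY i_item_id
    LIMIT 100;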