Posted to issues@hive.apache.org by "Stamatis Zampetakis (Jira)" <ji...@apache.org> on 2022/12/20 15:30:00 UTC

[jira] [Commented] (HIVE-26877) Parquet CTAS with JOIN on decimals with different precision/scale fail

    [ https://issues.apache.org/jira/browse/HIVE-26877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17649831#comment-17649831 ] 

Stamatis Zampetakis commented on HIVE-26877:
--------------------------------------------

The EXPLAIN plan for the CTAS query is shown below:
{noformat}
STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-2 depends on stages: Stage-1
  Stage-3 depends on stages: Stage-2, Stage-0
  Stage-0 depends on stages: Stage-1

STAGE PLANS:
  Stage: Stage-1
    Tez
#### A masked pattern was here ####
      Edges:
        Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 3 (SIMPLE_EDGE)
#### A masked pattern was here ####
      Vertices:
        Map 1 
            Map Operator Tree:
                TableScan
                  alias: table_a
                  Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
                  Select Operator
                    expressions: col_dec (type: decimal(5,0))
                    outputColumnNames: _col0
                    Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
                    Reduce Output Operator
                      key expressions: _col0 (type: decimal(38,10))
                      null sort order: z
                      sort order: +
                      Map-reduce partition columns: _col0 (type: decimal(38,10))
                      Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
            Execution mode: vectorized, llap
            LLAP IO: all inputs
        Map 3 
            Map Operator Tree:
                TableScan
                  alias: table_b
                  filterExpr: col_dec is not null (type: boolean)
                  Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: col_dec is not null (type: boolean)
                    Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
                    Select Operator
                      expressions: col_dec (type: decimal(38,10))
                      outputColumnNames: _col0
                      Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
                      Reduce Output Operator
                        key expressions: _col0 (type: decimal(38,10))
                        null sort order: z
                        sort order: +
                        Map-reduce partition columns: _col0 (type: decimal(38,10))
                        Statistics: Num rows: 1 Data size: 112 Basic stats: COMPLETE Column stats: NONE
            Execution mode: vectorized, llap
            LLAP IO: all inputs
        Reducer 2 
            Execution mode: llap
            Reduce Operator Tree:
              Merge Join Operator
                condition map:
                     Left Outer Join 0 to 1
                keys:
                  0 _col0 (type: decimal(38,10))
                  1 _col0 (type: decimal(38,10))
                outputColumnNames: _col0
                Statistics: Num rows: 1 Data size: 123 Basic stats: COMPLETE Column stats: NONE
                File Output Operator
                  compressed: false
                  Statistics: Num rows: 1 Data size: 123 Basic stats: COMPLETE Column stats: NONE
                  table:
                      input format: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
                      output format: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
                      serde: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
                      name: default.target

  Stage: Stage-2
    Dependency Collection

  Stage: Stage-3
    Create Table
      columns: col_dec decimal(5,0)
      name: default.target
      input format: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
      output format: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
      serde name: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe

  Stage: Stage-0
    Move Operator
      files:
          hdfs directory: true
{noformat}
Observe that the target Parquet table expects a decimal(5,0) while, according to the plan, we try to write a value of type decimal(38,10): the join promotes both key expressions to the common type decimal(38,10) (see the Reduce Output Operators above), and that is the type of _col0 reaching the File Output Operator.

Decimal values in Parquet files are written as fixed-length binaries, and the length is determined by the precision. For decimal(5,0) the binary field length is 3 bytes, while a decimal(38,10) needs a binary field of length 16, thus the write fails with the following exception.

{noformat}
Caused by: java.lang.IllegalArgumentException: Fixed Binary size 16 does not match field type length 3
{noformat}
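
For reference, Parquet stores such decimals as FIXED_LEN_BYTE_ARRAY, where the length is the smallest number of bytes whose signed two's-complement range covers 10^precision - 1. A back-of-the-envelope check of the two lengths involved (a sketch, not Hive's actual code; the formula is my own restatement of the Parquet spec rule):
{code:sql}
-- Minimum FIXED_LEN_BYTE_ARRAY length for a decimal of precision p:
-- n = ceil((p * log2(10) + 1) / 8)   (the +1 bit is for the sign)
SELECT ceil((5  * log2(10) + 1) / 8) AS len_dec_5_0,    -- 3,  matches "field type length 3"
       ceil((38 * log2(10) + 1) / 8) AS len_dec_38_10;  -- 16, matches "Fixed Binary size 16"
{code}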

Most likely the fix should be somewhere in the type derivation logic for the Parquet target table: instead of decimal(5,0), the derived column type should be decimal(38,10), which is the type of the expression after the join.
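
Until that is fixed, an explicit cast in the SELECT list may serve as a workaround, since it should make the type of the written value match the declared column type. This is only a hypothetical sketch based on the repro below; I have not verified it against this build:
{code:sql}
-- Hypothetical workaround: pin the projected type explicitly so the CTAS
-- target column and the value reaching the file sink agree.
create table target as
select cast(table_a.col_dec as decimal(5,0)) as col_dec
from table_a
left outer join table_b on
table_a.col_dec = table_b.col_dec;
{code}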

> Parquet CTAS with JOIN on decimals with different precision/scale fail
> ----------------------------------------------------------------------
>
>                 Key: HIVE-26877
>                 URL: https://issues.apache.org/jira/browse/HIVE-26877
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 4.0.0-alpha-2
>            Reporter: Stamatis Zampetakis
>            Assignee: Stamatis Zampetakis
>            Priority: Major
>         Attachments: ctas_parquet_join.q
>
>
> Creating a Parquet table using the CREATE TABLE AS SELECT (CTAS) syntax leads to a runtime error when the SELECT statement joins columns with different precision/scale.
> Steps to reproduce:
> {code:sql}
> CREATE TABLE table_a (col_dec decimal(5,0));
> CREATE TABLE table_b(col_dec decimal(38,10));
> INSERT INTO table_a VALUES (1);
> INSERT INTO table_b VALUES (1.0000000000);
> set hive.default.fileformat=parquet;
> create table target as
> select table_a.col_dec
> from table_a
> left outer join table_b on
> table_a.col_dec = table_b.col_dec;
> {code}
> Stacktrace:
> {noformat}
> 2022-12-20T07:02:52,237  INFO [2dfbd95a-7553-467b-b9d0-629100785502 Listener at 0.0.0.0/46609] reexec.ReExecuteLostAMQueryPlugin: Got exception message: Vertex failed, vertexName=Reducer 2, vertexId=vertex_1671548565336_0001_3_02, diagnostics=[Task failed, taskId=task_1671548565336_0001_3_02_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1671548565336_0001_3_02_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: Hive Runtime Error while closing operators: Fixed Binary size 16 does not match field type length 3
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:348)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:276)
> 	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:381)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:82)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:69)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:69)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:39)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Hive Runtime Error while closing operators: Fixed Binary size 16 does not match field type length 3
> 	at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:379)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:310)
> 	... 15 more
> Caused by: java.lang.IllegalArgumentException: Fixed Binary size 16 does not match field type length 3
> 	at org.apache.parquet.column.values.plain.FixedLenByteArrayPlainValuesWriter.writeBytes(FixedLenByteArrayPlainValuesWriter.java:56)
> 	at org.apache.parquet.column.impl.ColumnWriterBase.write(ColumnWriterBase.java:174)
> 	at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.addBinary(MessageColumnIO.java:476)
> 	at org.apache.parquet.io.RecordConsumerLoggingWrapper.addBinary(RecordConsumerLoggingWrapper.java:116)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$DecimalDataWriter.write(DataWritableWriter.java:571)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:228)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:251)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:115)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:76)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:35)
> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
> 	at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:182)
> 	at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:44)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:161)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:174)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1160)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:888)
> 	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:889)
> 	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:921)
> 	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:1013)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:419)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:382)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:372)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinFinalLeftData(CommonMergeJoinOperator.java:584)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.close(CommonMergeJoinOperator.java:502)
> 	at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:356)
> 	... 16 more
> ], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : attempt_1671548565336_0001_3_02_000000_1:java.lang.RuntimeException: java.lang.RuntimeException: Hive Runtime Error while closing operators: Fixed Binary size 16 does not match field type length 3
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:348)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:276)
> 	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:381)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:82)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:69)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:69)
> 	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:39)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Hive Runtime Error while closing operators: Fixed Binary size 16 does not match field type length 3
> 	at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:379)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:310)
> 	... 15 more
> Caused by: java.lang.IllegalArgumentException: Fixed Binary size 16 does not match field type length 3
> 	at org.apache.parquet.column.values.plain.FixedLenByteArrayPlainValuesWriter.writeBytes(FixedLenByteArrayPlainValuesWriter.java:56)
> 	at org.apache.parquet.column.impl.ColumnWriterBase.write(ColumnWriterBase.java:174)
> 	at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.addBinary(MessageColumnIO.java:476)
> 	at org.apache.parquet.io.RecordConsumerLoggingWrapper.addBinary(RecordConsumerLoggingWrapper.java:116)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$DecimalDataWriter.write(DataWritableWriter.java:571)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:228)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:251)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:115)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:76)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:35)
> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
> 	at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:182)
> 	at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:44)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:161)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:174)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1160)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:888)
> 	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.internalForward(CommonJoinOperator.java:889)
> 	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genAllOneUniqueJoinObject(CommonJoinOperator.java:921)
> 	at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:1013)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinObject(CommonMergeJoinOperator.java:419)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:382)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinOneGroup(CommonMergeJoinOperator.java:372)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.joinFinalLeftData(CommonMergeJoinOperator.java:584)
> 	at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.close(CommonMergeJoinOperator.java:502)
> 	at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.close(ReduceRecordProcessor.java:356)
> 	... 16 more
> ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:0, Vertex vertex_1671548565336_0001_3_02 [Reducer 2] killed/failed due to:OWN_TASK_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 retryPossible: false
> {noformat}
> The problem is reproducible in master (7c343471aa68d7d06209694d9d6b181bd58e0793) by running
> {noformat}
> mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=ctas_parquet_join.q -Dtest.output.overwrite
> {noformat}


