Posted to issues@hive.apache.org by "KIRTI RUGE (Jira)" <ji...@apache.org> on 2023/04/03 07:52:00 UTC
[jira] [Updated] (HIVE-27213) Parquet logical decimal type to INT32 is not working while computing statistics
[ https://issues.apache.org/jira/browse/HIVE-27213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
KIRTI RUGE updated HIVE-27213:
------------------------------
Description: 
test.parquet
Steps to reproduce:
dfs ${system:test.dfs.mkdir} hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
dfs -copyFromLocal ../../data/files/dwxtest.parquet hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
dfs -ls hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825/;
CREATE EXTERNAL TABLE `web_sales`(
`ws_sold_time_sk` int,
`ws_ship_date_sk` int,
`ws_item_sk` int,
`ws_bill_customer_sk` int,
`ws_bill_cdemo_sk` int,
`ws_bill_hdemo_sk` int,
`ws_bill_addr_sk` int,
`ws_ship_customer_sk` int,
`ws_ship_cdemo_sk` int,
`ws_ship_hdemo_sk` int,
`ws_ship_addr_sk` int,
`ws_web_page_sk` int,
`ws_web_site_sk` int,
`ws_ship_mode_sk` int,
`ws_warehouse_sk` int,
`ws_promo_sk` int,
`ws_order_number` bigint,
`ws_quantity` int,
`ws_wholesale_cost` decimal(7,2),
`ws_list_price` decimal(7,2),
`ws_sales_price` decimal(7,2),
`ws_ext_discount_amt` decimal(7,2),
`ws_ext_sales_price` decimal(7,2),
`ws_ext_wholesale_cost` decimal(7,2),
`ws_ext_list_price` decimal(7,2),
`ws_ext_tax` decimal(7,2),
`ws_coupon_amt` decimal(7,2),
`ws_ext_ship_cost` decimal(7,2),
`ws_net_paid` decimal(7,2),
`ws_net_paid_inc_tax` decimal(7,2),
`ws_net_paid_inc_ship` decimal(7,2),
`ws_net_paid_inc_ship_tax` decimal(7,2),
`ws_net_profit` decimal(7,2))
PARTITIONED BY (
`ws_sold_date_sk` int)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS PARQUET LOCATION 'hdfs:///tmp/dwxtest/';
MSCK REPAIR TABLE web_sales;
analyze table web_sales compute statistics for columns;
Error Stack:
analyze table web_sales compute statistics for columns;
], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1678779198717_0000_2_00_000052_3:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://xx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://xxx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:704)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:663)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:543)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:189)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
... 15 more
Caused by: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://xxx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:469)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
... 26 more
Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://xxx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:264)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:84)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:93)
at org.apache.hadoop.hive.ql.io.RecordReaderWrapper.create(RecordReaderWrapper.java:72)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:460)
... 27 more
Caused by: java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainIntegerDictionary
at org.apache.parquet.column.Dictionary.decodeToBinary(Dictionary.java:41)
at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$BinaryConverter.setDictionary(ETypeConverter.java:966)
at org.apache.parquet.column.impl.ColumnReaderBase.<init>(ColumnReaderBase.java:415)
at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:46)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:82)
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:177)
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:141)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:230)
... 32 more
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:195, Vertex vertex_1678779198717_0000_2_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1678779198717_0000_2_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1009, Vertex vertex_1678779198717_0000_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1 (state=08S01,code=2)
(was: [^test.parquet]
Steps to reproduce:
dfs ${system:test.dfs.mkdir} hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
dfs -copyFromLocal ../../data/files/dwxtest.parquet hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
dfs -ls hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825/;
CREATE EXTERNAL TABLE `web_sales`(
`ws_sold_time_sk` int,
`ws_ship_date_sk` int,
`ws_item_sk` int,
`ws_bill_customer_sk` int,
`ws_bill_cdemo_sk` int,
`ws_bill_hdemo_sk` int,
`ws_bill_addr_sk` int,
`ws_ship_customer_sk` int,
`ws_ship_cdemo_sk` int,
`ws_ship_hdemo_sk` int,
`ws_ship_addr_sk` int,
`ws_web_page_sk` int,
`ws_web_site_sk` int,
`ws_ship_mode_sk` int,
`ws_warehouse_sk` int,
`ws_promo_sk` int,
`ws_order_number` bigint,
`ws_quantity` int,
`ws_wholesale_cost` decimal(7,2),
`ws_list_price` decimal(7,2),
`ws_sales_price` decimal(7,2),
`ws_ext_discount_amt` decimal(7,2),
`ws_ext_sales_price` decimal(7,2),
`ws_ext_wholesale_cost` decimal(7,2),
`ws_ext_list_price` decimal(7,2),
`ws_ext_tax` decimal(7,2),
`ws_coupon_amt` decimal(7,2),
`ws_ext_ship_cost` decimal(7,2),
`ws_net_paid` decimal(7,2),
`ws_net_paid_inc_tax` decimal(7,2),
`ws_net_paid_inc_ship` decimal(7,2),
`ws_net_paid_inc_ship_tax` decimal(7,2),
`ws_net_profit` decimal(7,2))
PARTITIONED BY (
`ws_sold_date_sk` int)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS PARQUET LOCATION 'hdfs:///tmp/dwxtest/';
MSCK REPAIR TABLE web_sales;
analyze table web_sales compute statistics for columns;
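The repro hinges on the attached file's decimal(7,2) columns being stored with Parquet's INT32 physical type, which the format allows for DECIMAL with precision <= 9: the file then holds only the unscaled integer, and the scale comes from the column metadata. A short stdlib Python sketch of that value mapping, for orientation (the helper name is ours, not anything in Hive or parquet-mr):

```python
from decimal import Decimal

def decode_int32_decimal(unscaled: int, scale: int) -> Decimal:
    # A Parquet DECIMAL(p, s) backed by INT32 stores only the unscaled
    # integer; the logical value is unscaled * 10**-s, where s is the
    # scale declared in the column metadata.
    return Decimal(unscaled).scaleb(-scale)

# decimal(7,2) examples, e.g. a ws_sales_price value:
print(decode_int32_decimal(1234, 2))  # 12.34
print(decode_int32_decimal(-50, 2))   # -0.50
```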
Error Stack:
{noformat}
analyze table web_sales compute statistics for columns;
], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1678779198717_0000_2_00_000052_3:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://nfqe-tpcds-test/spark-tpcds/sf1000-parquet/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://nfqe-tpcds-test/spark-tpcds/sf1000-parquet/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:704)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:663)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:543)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:189)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
... 15 more
Caused by: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://nfqe-tpcds-test/spark-tpcds/sf1000-parquet/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:469)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
... 26 more
Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file s3a://nfqe-tpcds-test/spark-tpcds/sf1000-parquet/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:264)
at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:84)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:93)
at org.apache.hadoop.hive.ql.io.RecordReaderWrapper.create(RecordReaderWrapper.java:72)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:460)
... 27 more
Caused by: java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainIntegerDictionary
at org.apache.parquet.column.Dictionary.decodeToBinary(Dictionary.java:41)
at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$BinaryConverter.setDictionary(ETypeConverter.java:966)
at org.apache.parquet.column.impl.ColumnReaderBase.<init>(ColumnReaderBase.java:415)
at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:46)
at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:82)
at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:177)
at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:141)
at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:230)
... 32 more
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:195, Vertex vertex_1678779198717_0000_2_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1678779198717_0000_2_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1009, Vertex vertex_1678779198717_0000_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1 (state=08S01,code=2)
{noformat})
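The innermost frames point at the root cause: for a DECIMAL column Hive's ETypeConverter installs a BinaryConverter, whose setDictionary unconditionally calls Dictionary.decodeToBinary; a dictionary page built from INT32 values (PlainIntegerDictionary) supports only integer decoding, so the reader fails before materializing a single row. A minimal Python sketch of the mismatch and of the kind of type-aware dispatch a fix needs (class and function names here are illustrative stand-ins, not Hive's actual patch):

```python
from decimal import Decimal

class PlainIntegerDictionary:
    """Stand-in for Parquet's INT32-backed dictionary page: entries can
    be decoded to int, but not to binary."""
    def __init__(self, entries):
        self.entries = entries

    def decode_to_int(self, idx):
        return self.entries[idx]

    def decode_to_binary(self, idx):
        # Mirrors org.apache.parquet.column.Dictionary's default for an
        # unsupported decoding -- the call that surfaces in the trace.
        raise NotImplementedError("PlainIntegerDictionary")

def materialize_decimal(dictionary, idx, scale):
    """Hypothetical converter-side dispatch: for an INT32-backed
    dictionary, take the unscaled int and apply the column scale
    instead of asking the dictionary for a binary value."""
    if isinstance(dictionary, PlainIntegerDictionary):
        return Decimal(dictionary.decode_to_int(idx)).scaleb(-scale)
    return dictionary.decode_to_binary(idx)

d = PlainIntegerDictionary([1234, -50])
print(materialize_decimal(d, 0, 2))  # 12.34, instead of an exception
```

The real converter would also need the analogous path for INT64-backed decimals; the sketch only shows the INT32 case from this trace.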
> Parquet logical decimal type to INT32 is not working while computing statistics
> -----------------------------------------------------------------------------
>
> Key: HIVE-27213
> URL: https://issues.apache.org/jira/browse/HIVE-27213
> Project: Hive
> Issue Type: Improvement
> Reporter: KIRTI RUGE
> Assignee: KIRTI RUGE
> Priority: Major
> Attachments: test.parquet
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)