Posted to issues@hive.apache.org by "Mr.黄 (Jira)" <ji...@apache.org> on 2022/02/18 03:03:00 UTC

[jira] [Commented] (HIVE-11625) Map instances with null keys are not properly handled for Parquet tables

    [ https://issues.apache.org/jira/browse/HIVE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494334#comment-17494334 ] 

Mr.黄 commented on HIVE-11625:
-----------------------------

Hello, I did not hit this problem on Hive 2.1.1, but it reappeared on Hive 3.1.2. I do not understand why the lower version succeeds while the higher version fails. Will this bug be fixed in a later release? My version information and error message follow:
{code:java}
spark version: 2.4.0-cdh6.3.2
hive version: 2.1.1-cdh6.3.2
scala> spark.sql("create table test STORED AS PARQUET as select map() as a")
scala> sql("select * from test").show
+---+                                                                           
|  a|
+---+
| []|
+---+

-----------------------------------------------------------------------------------------------------------------
spark version: 2.4.3
hive version: 3.1.2
scala> spark.sql("create table test STORED AS PARQUET as select map() as a")

Caused by: org.apache.spark.SparkException: Task failed while writing rows.
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:121)
  at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
  at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
  at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
  at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
  at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
  at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
  at org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
  at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:245)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
  at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
  ... 10 more
Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
  at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:241)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
  ... 23 more {code}
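
For context on what the stack trace above is saying (a reading of the error, not the actual Hive 3.1.2 source): MessageColumnIO.endField() rejects a startField()/endField() pair with nothing written in between, which is exactly what naively writing a zero-entry map produces. A minimal sketch of the kind of guard that avoids this, with all names illustrative:
{code:java}
import java.util.Map;
// org.apache.parquet.io.api in current releases; parquet.io.api in the
// older bundled jars shown in the stack trace above
import org.apache.parquet.io.api.RecordConsumer;

// Illustrative sketch only -- not the actual DataWritableWriter code.
final class EmptyMapWriteSketch {
  static void writeMap(RecordConsumer consumer, String repeatedGroupName,
                       Map<?, ?> mapValues) {
    consumer.startGroup();
    // Calling startField()/endField() around zero entries is what raises
    // "empty fields are illegal"; an empty map must omit the repeated
    // key_value group entirely.
    if (mapValues != null && !mapValues.isEmpty()) {
      consumer.startField(repeatedGroupName, 0);
      for (Map.Entry<?, ?> entry : mapValues.entrySet()) {
        consumer.startGroup();
        // key would be written at field index 0, value at index 1 (elided)
        consumer.endGroup();
      }
      consumer.endField(repeatedGroupName, 0);
    }
    consumer.endGroup();
  }
}
{code}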

> Map instances with null keys are not properly handled for Parquet tables
> ------------------------------------------------------------------------
>
>                 Key: HIVE-11625
>                 URL: https://issues.apache.org/jira/browse/HIVE-11625
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 0.13.1, 0.14.0, 1.0.1, 1.1.1, 1.2.1
>            Reporter: Cheng Lian
>            Priority: Major
>
> Hive allows maps with null keys:
> {code:sql}
> hive> select map(null, 'foo', 1, 'bar', null, 'baz');
> {null:"baz",1:"bar"}
> {code}
> However, when written into Parquet tables, map entries with null as keys are either dropped or cause exceptions. Below is the result of Hive 0.14.0 and 0.13.1:
> {code:sql}
> hive> CREATE TABLE map_test STORED AS PARQUET
>     > AS SELECT MAP(null, 'foo', 1, 'bar', null, 'baz');
> ...
> hive> SELECT * from map_test;
> {1:"bar"}
> {code}
> And Hive 1.2.1 throws exception:
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable (null)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:172)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
> 	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable (null)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:516)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
> 	... 8 more
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
> 	at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
> 	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
> 	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:753)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
> 	at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
> 	at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:162)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:508)
> 	... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
> 	at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:228)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
> 	... 23 more
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable (null)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:172)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
> 	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
> 	at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable (null)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:516)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
> 	... 8 more
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
> 	at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
> 	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
> 	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
> 	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:753)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
> 	at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> 	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
> 	at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:162)
> 	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:508)
> 	... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
> 	at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:228)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
> 	... 23 more
> {noformat}
> The problematic method is [{{DataWritableWriter.writeMap()}}|https://github.com/apache/hive/blob/release-1.2.1/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriter.java#L223-L237]. Although the key-value entry itself is never null, either its key or its value can be null, and null keys are not handled properly.
> According to the parquet-format spec, keys of a Parquet {{MAP}} must not be null. The question, then, is whether we should silently ignore null keys when writing a map to a Parquet table, as Hive 0.14.0 does, or throw an exception (ideally a more descriptive one than the one quoted above), as Hive 1.2.1 does. A sketch of both options follows.
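> 
> A hedged sketch of the two options, purely illustrative (the class and helpers below are not from the Hive source): either filter out null-key entries before handing them to the record consumer, or fail fast with a message that names the column:
> {code:java}
> import java.util.Map;
> 
> // Illustrative only -- not the actual DataWritableWriter code.
> final class NullKeyPolicySketch {
>   // Option 1: silently drop entries whose key is null, matching what
>   // Hive 0.14.0 effectively does.
>   static boolean shouldWrite(Map.Entry<?, ?> entry) {
>     return entry.getKey() != null;
>   }
> 
>   // Option 2: fail fast with a descriptive message instead of the opaque
>   // "empty fields are illegal" ParquetEncodingException.
>   static void checkKey(Map.Entry<?, ?> entry, String columnName) {
>     if (entry.getKey() == null) {
>       throw new IllegalArgumentException("Map key is null in column '"
>           + columnName + "'; Parquet MAP keys must be non-null");
>     }
>   }
> }
> {code}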


