Posted to issues@spark.apache.org by "Mr.黄 (Jira)" <ji...@apache.org> on 2022/02/18 03:04:00 UTC

[jira] [Commented] (SPARK-31345) Spark fails to write hive parquet table with empty array

    [ https://issues.apache.org/jira/browse/SPARK-31345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494336#comment-17494336 ] 

Mr.黄 commented on SPARK-31345:
------------------------------

Hello. I did not hit this problem on Spark 2.4.0-CDH6.3.2, but it reappears on Spark 2.4.3. I do not understand why the lower version succeeds while the higher version fails. Does the fix for this bug not cover 2.4.3? My version information and the error message follow:
{code:java}
spark version: 2.4.0-cdh6.3.2
hive version: 2.1.1-cdh6.3.2
scala> spark.sql("create table test STORED AS PARQUET as select map() as a")
scala> sql("select * from test").show
+---+                                                                           
|  a|
+---+
| []|
+---+

-----------------------------------------------------------------------------------------------------------------
spark version: 2.4.3
hive version: 3.1.2
scala> spark.sql("create table test STORED AS PARQUET as select map() as a")

Caused by: org.apache.spark.SparkException: Task failed while writing rows.
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:121)
  at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
  at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
  at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
  at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
  at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
  at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
  at org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
  at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:245)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
  at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
  ... 10 more
Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
  at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:241)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
  at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
  ... 23 more {code}
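
For what it's worth, the exception comes from Hive's DataWritableWriter, which rejects empty repeated groups; Spark's native Parquet data source does not have this restriction. Below is a minimal sketch of a possible workaround, assuming a Spark-native Parquet table (USING PARQUET) is acceptable in place of a Hive SerDe table (STORED AS PARQUET):
{code:java}
// Sketch of a workaround: create the table with Spark's native Parquet
// data source instead of the Hive Parquet SerDe, so the write bypasses
// Hive's DataWritableWriter (which throws on empty arrays/maps).
// Assumption: downstream consumers can read a Spark-native Parquet table.
spark.sql("create table test USING PARQUET as select map() as a")
spark.sql("select * from test").show()
{code}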

> Spark fails to write hive parquet table with empty array
> --------------------------------------------------------
>
>                 Key: SPARK-31345
>                 URL: https://issues.apache.org/jira/browse/SPARK-31345
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2, 2.1.3, 2.2.3, 2.3.4, 2.4.5
>         Environment: - Spark 2.4.5 (compiled against hadoop 2.10.0, and bundled with hadoop 2.10.0 dependencies in `spark.yarn.archive`)
> - Hive 3.1.2
> - Hadoop 3.2.1
>            Reporter: Naitree Zhu
>            Priority: Major
>
> When writing to an existing Hive Parquet table using Spark SQL, I encountered an error when writing an empty `array()` or `map()`.
> Test case to reproduce:
> {code:java}
> spark.sql("create table test_null (col1 array<int>) stored as parquet")
> val df = spark.sql("select cast(array() as array<int>) as col1")
> df.write.format("hive").mode("append").saveAsTable("default.test_null")
> {code}
> Exception raised:
> {code:java}
> 20/04/04 09:16:03 WARN TaskSetManager: Lost task 0.0 in stage 16.0 (TID 30, test-node, executor 2): org.apache.spark.SparkException: Task failed while writing rows.
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:257)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:177)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:123)
> 	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
> 	at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
> 	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
> 	at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
> 	at org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
> 	at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:245)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:242)
> 	... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
> 	at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeArray(DataWritableWriter.java:186)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:113)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
> 	... 21 more
> {code}
> However, if Spark is allowed to create the table implicitly, the write succeeds (presumably because the implicitly created table is written with Spark's native Parquet data source rather than Hive's DataWritableWriter).
> {code:java}
> spark.sql("drop table default.test_null")
> df.write.format("parquet").mode("append").saveAsTable("default.test_null")
> {code}
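> When the target must remain a Hive Parquet table, another possible workaround is to null out empty collections before writing. This is a sketch, assuming NULL values are acceptable to downstream readers in place of empty arrays:
> {code:java}
> // Hypothetical guard: replace empty arrays with NULL so Hive's
> // DataWritableWriter never sees an empty repeated group. NULL fields
> // are legal in Parquet; it is the empty group that triggers
> // "empty fields are illegal" above.
> val df = spark.sql("select cast(array() as array<int>) as col1")
> val safe = df.selectExpr("if(size(col1) = 0, null, col1) as col1")
> safe.write.format("hive").mode("append").saveAsTable("default.test_null")
> {code}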



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org