Posted to issues@spark.apache.org by "shivusondur (JIRA)" <ji...@apache.org> on 2018/09/07 05:29:00 UTC

[jira] [Comment Edited] (SPARK-25271) Creating parquet table with all the column null throws exception

    [ https://issues.apache.org/jira/browse/SPARK-25271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606693#comment-16606693 ] 

shivusondur edited comment on SPARK-25271 at 9/7/18 5:28 AM:
-------------------------------------------------------------

As [~S71955] mentioned, the behaviour changed with [https://github.com/apache/spark/pull/20521]

While debugging "spark.sql("create table vp_reader STORED AS PARQUET as select * from vp_reader_temp")", I found the following details.
 
In Spark 2.2.1, CreateHiveTableAsSelectCommand.run() (in org.apache.spark.sql.hive.execution) plans the insert with "InsertIntoTable", which writes through ParquetFileFormat, as shown in the snapshots below.

Figure 1: "InsertIntoTable" is used for plan generation

!image-2018-09-07-09-29-33-370.png!       

Figure 2: "ParquetFileFormat" is used as the file format

!image-2018-09-07-09-29-52-899.png! 
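
For anyone following this without a debugger, the planned command can also be checked from spark-shell. A minimal sketch, assuming a session with Hive support and the vp_reader_temp table from the repro steps below:

{code:java}
// Minimal sketch, assuming spark-shell (Spark 2.2.x/2.3.x) with Hive support
// and the vp_reader_temp table from the repro steps below.
// EXPLAIN EXTENDED plans the CTAS without executing it, so the command node
// and the chosen insert operator are visible in the printed plan.
spark.sql(
  "EXPLAIN EXTENDED create table vp_reader STORED AS PARQUET " +
    "as select * from vp_reader_temp"
).show(truncate = false)
{code}

On 2.2.1 the printed plan should contain "InsertIntoTable", matching the snapshots above.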

But in Spark 2.3.1, CreateHiveTableAsSelectCommand.run() plans the insert with "InsertIntoHiveTable", which writes through HiveFileFormat, as shown in the snapshots below.
  
Figure 3: "InsertIntoHiveTable" is used for plan generation
!image-2018-09-07-09-32-43-892.png! 


Figure 4: "HiveFileFormat" is used as the file format
!image-2018-09-07-09-33-03-095.png!
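
As a possible interim workaround (my assumption, not verified against this exact data), creating the table through the data source syntax keeps the write on Spark's native parquet writer rather than Hive's serde:

{code:java}
// Hypothetical workaround sketch, not a verified fix: "USING PARQUET" routes
// the CTAS through Spark's native ParquetFileFormat instead of Hive's
// DataWritableWriter, which is the component rejecting the null map here.
spark.sql("create table vp_reader USING PARQUET as select * from vp_reader_temp")
{code}

Note the resulting table is a Spark data source table rather than a Hive serde table, so this may not suit every deployment.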

cc [~hyukjin.kwon]. Let me know if any further clarification is needed.

 



> Creating parquet table with all the column null throws exception
> ----------------------------------------------------------------
>
>                 Key: SPARK-25271
>                 URL: https://issues.apache.org/jira/browse/SPARK-25271
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: shivusondur
>            Priority: Major
>         Attachments: image-2018-09-07-09-12-34-944.png, image-2018-09-07-09-29-33-370.png, image-2018-09-07-09-29-52-899.png, image-2018-09-07-09-32-43-892.png, image-2018-09-07-09-33-03-095.png
>
>
> {code:java}
>  1)cat /data/parquet.dat
> 1$abc2$pqr:3$xyz
> null{code}
>  
> {code:java}
> 2)spark.sql("create table vp_reader_temp (projects map<int, string>) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' COLLECTION ITEMS TERMINATED BY ':' MAP KEYS TERMINATED BY '$'")
> {code}
> {code:java}
> 3)spark.sql("
> LOAD DATA LOCAL INPATH '/data/parquet.dat' INTO TABLE vp_reader_temp")
> {code}
> {code:java}
> 4)spark.sql("create table vp_reader STORED AS PARQUET as select * from vp_reader_temp")
> {code}
> *Result:* Throws an exception (works fine with Spark 2.2.1)
> {code:java}
> java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
> 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:123)
> 	at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:180)
> 	at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:46)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:112)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:125)
> 	at org.apache.spark.sql.hive.execution.HiveOutputWriter.write(HiveFileFormat.scala:149)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:406)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:283)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:281)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1438)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:286)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:211)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:210)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:109)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:349)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
> 	at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:320)
> 	at org.apache.parquet.io.RecordConsumerLoggingWrapper.endField(RecordConsumerLoggingWrapper.java:165)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:241)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
> 	at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
> 	... 21 more
> {code}


