Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:35:32 UTC

[jira] [Resolved] (SPARK-15374) Spark created Parquet files cause NPE when a column has only NULL values

     [ https://issues.apache.org/jira/browse/SPARK-15374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-15374.
----------------------------------
    Resolution: Incomplete

> Spark created Parquet files cause NPE when a column has only NULL values
> ------------------------------------------------------------------------
>
>                 Key: SPARK-15374
>                 URL: https://issues.apache.org/jira/browse/SPARK-15374
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.0, 1.6.1
>         Environment: AWS EMR running Spark 1.6.1
>            Reporter: Euan de Kock
>            Priority: Major
>              Labels: bulk-closed
>
> When an external table is created by Spark and subsequently accessed by Hive, Hive throws a NullPointerException (NPE) if one of the columns contains only null values. Spark (and Presto) can read this data successfully, but Hive cannot. If the same dataset is created by Hive, it is readable by all three systems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org