Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/03/14 16:23:41 UTC

[jira] [Commented] (SPARK-19950) nullable ignored when df.load() is executed for file-based data source

    [ https://issues.apache.org/jira/browse/SPARK-19950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924509#comment-15924509 ] 

Apache Spark commented on SPARK-19950:
--------------------------------------

User 'kiszk' has created a pull request for this issue:
https://github.com/apache/spark/pull/17293

> nullable ignored when df.load() is executed for file-based data source
> ----------------------------------------------------------------------
>
>                 Key: SPARK-19950
>                 URL: https://issues.apache.org/jira/browse/SPARK-19950
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Kazuaki Ishizaki
>
> This problem was reported on the [Databricks forum|https://forums.databricks.com/questions/7123/nullable-seemingly-ignored-when-reading-parquet.html].
> When the following code is executed, the schema for "id" in {{dfRead}} has {{nullable = true}}. It should be {{nullable = false}}.
> {code:scala}
> import org.apache.spark.sql.types.{LongType, StructField, StructType}
>
> val field = "id"
> // spark.range never produces nulls, so the written "id" column contains no null values
> val df = spark.range(0, 5, 1, 1).toDF(field)
> val fmt = "parquet"
> val path = "/tmp/parquet"
> val schema = StructType(Seq(StructField(field, LongType, nullable = false)))
> df.write.format(fmt).mode("overwrite").save(path)
> // the user-specified schema declares nullable = false, yet the loaded schema reports nullable = true
> val dfRead = spark.read.format(fmt).schema(schema).load(path)
> dfRead.printSchema
> {code}
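
For reference, a minimal workaround sketch (not part of the original report, and assuming the data truly contains no nulls): re-applying the user-specified schema with spark.createDataFrame preserves the declared nullability instead of the forced {{nullable = true}}.

{code:scala}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

// Hypothetical workaround sketch: re-impose the declared schema on the loaded rows.
val schema = StructType(Seq(StructField("id", LongType, nullable = false)))
val dfRead = spark.read.format("parquet").schema(schema).load("/tmp/parquet")
val dfStrict = spark.createDataFrame(dfRead.rdd, schema)
dfStrict.printSchema  // expected to show: id: long (nullable = false)
{code}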



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org