Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/10/17 08:51:00 UTC

[jira] [Commented] (SPARK-25739) Double quote coming in as empty value even when emptyValue set as null

    [ https://issues.apache.org/jira/browse/SPARK-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16653203#comment-16653203 ] 

Hyukjin Kwon commented on SPARK-25739:
--------------------------------------

So this is fixed in Spark 2.4, right? That option was only added in Spark 2.4. See SPARK-25241
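
For reference, a minimal sketch of how this should look on 2.4 (assuming a 2.4 shell or notebook where the `spark` session and its implicits are in scope; the output path is illustrative):

{code:java}
import spark.implicits._

val df = List((1, ""), (2, "hello"), (3, "hi"), (4, null)).toDF("key", "value")

df.repartition(1)
  .write
  .mode("overwrite")
  // The emptyValue write option exists from 2.4 on; per the migration note,
  // setting it to an unquoted empty string should restore the 2.3 behavior
  // of writing empty strings with no characters.
  .option("emptyValue", "")
  .format("csv")
  .save("/tmp/spark-25739-check/")
{code}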

> Double quote coming in as empty value even when emptyValue set as null
> ----------------------------------------------------------------------
>
>                 Key: SPARK-25739
>                 URL: https://issues.apache.org/jira/browse/SPARK-25739
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.3.1
>         Environment:  Databricks - 4.2 (includes Apache Spark 2.3.1, Scala 2.11) 
>            Reporter: Brian Jones
>            Priority: Major
>
>  Example code - 
> {code:java}
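> // Reproduction from a Databricks notebook: spark implicits (for toDF) and dbutils are assumed to be in scope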
> val df = List((1,""),(2,"hello"),(3,"hi"),(4,null)).toDF("key","value")
> df
> .repartition(1)
> .write
> .mode("overwrite")
> .option("nullValue", null)
> .option("emptyValue", null)
> .option("delimiter",",")
> .option("quoteMode", "NONE")
> .option("escape","\\")
> .format("csv")
> .save("/tmp/nullcsv/")
> var out = dbutils.fs.ls("/tmp/nullcsv/")
> var file = out(out.size - 1)
> val x = dbutils.fs.head("/tmp/nullcsv/" + file.name)
> println(x)
> {code}
> Output - 
> {code:java}
> 1,""
> 3,hi
> 2,hello
> 4,
> {code}
> Expected output - 
> {code:java}
> 1,
> 3,hi
> 2,hello
> 4,
> {code}
>  
> This commit is relevant to my issue: [https://github.com/apache/spark/commit/b7efca7ece484ee85091b1b50bbc84ad779f9bfe]
> "Since Spark 2.4, empty strings are saved as quoted empty strings `""`. In version 2.3 and earlier, empty strings are equal to `null` values and do not reflect to any characters in saved CSV files."
> I am on Spark version 2.3.1, so empty strings should be written the same as null (no characters). Even so, I am also passing the "emptyValue" option. However, my empty values are still coming out as `""` in the written file.
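> A minimal workaround sketch for 2.3.x (assuming the same `df` and column name as above): map empty strings to null before writing, since nulls are written with no characters by default.
> {code:java}
> import org.apache.spark.sql.functions.{col, when}
>
> // Keep non-empty values; empty strings (and existing nulls) fall through
> // to null, which the CSV writer emits as no characters.
> val cleaned = df.withColumn("value", when(col("value") =!= "", col("value")))
>
> cleaned
>   .repartition(1)
>   .write
>   .mode("overwrite")
>   .format("csv")
>   .save("/tmp/nullcsv/")
> {code}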
>  
> I have tested the provided code on Databricks runtimes 5.0 and 4.1, and it gives the expected output. However, on Databricks runtimes 4.2 and 4.3 (which run Spark 2.3.1), we get the incorrect output.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org