Posted to issues@spark.apache.org by "Jork Zijlstra (JIRA)" <ji...@apache.org> on 2016/11/29 09:04:58 UTC

[jira] [Comment Edited] (SPARK-17916) CSV data source treats empty string as null no matter what nullValue option is

    [ https://issues.apache.org/jira/browse/SPARK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15704736#comment-15704736 ] 

Jork Zijlstra edited comment on SPARK-17916 at 11/29/16 9:04 AM:
-----------------------------------------------------------------

I also have the same issue in 2.0.1. This code seems to be the problem:

```
private def rowToString(row: InternalRow): Seq[String] = {
  var i = 0
  val values = new Array[String](row.numFields)
  while (i < row.numFields) {
    if (!row.isNullAt(i)) {
      values(i) = valueConverters(i).apply(row, i)
    } else {
      values(i) = params.nullValue
    }
    i += 1
  }
  values
}

def castTo(
    datum: String,
    castType: DataType,
    nullable: Boolean = true,
    options: CSVOptions = CSVOptions()): Any = {

  if (nullable && datum == options.nullValue) {
    null
  } else {

}
```

So first the missing value in the data is transformed into the nullValue. Then in castTo the value is checked against the nullValue, which is always true for a missing value.
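
For reference, a minimal reproduction sketch (not taken from the Spark sources; the temp file and column names simply mirror the data from the issue description, and `spark` is the SparkSession from a spark-shell):

```
import java.nio.file.Files

// Write the sample data from the issue description to a temporary file.
val path = Files.createTempFile("spark-17916", ".csv")
Files.write(path, "col1,col2\n1,\"-\"\n2,\"\"\n".getBytes("UTF-8"))

// Only "-" is configured as the null marker.
val df = spark.read
  .format("csv")
  .option("header", "true")
  .option("nullValue", "-")
  .load(path.toString)

df.show()
// Expected: null only in the first row of col2 (the "-" value).
// Observed on 2.0.1: null in both rows, matching the behaviour described above.
```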


> CSV data source treats empty string as null no matter what nullValue option is
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-17916
>                 URL: https://issues.apache.org/jira/browse/SPARK-17916
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.1
>            Reporter: Hossein Falaki
>
> When a user configures {{nullValue}} in the CSV data source, in addition to that value, all empty string values are also converted to null.
> {code}
> data:
> col1,col2
> 1,"-"
> 2,""
> {code}
> {code}
> spark.read.format("csv").option("nullValue", "-")
> {code}
> We will find a null in both rows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
