Posted to issues@spark.apache.org by "Sean R. Owen (Jira)" <ji...@apache.org> on 2023/02/08 21:13:00 UTC

[jira] [Assigned] (SPARK-42335) Pass the comment option through to univocity if users set it explicitly in CSV dataSource

     [ https://issues.apache.org/jira/browse/SPARK-42335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean R. Owen reassigned SPARK-42335:
------------------------------------

    Assignee: Wei Guo

> Pass the comment option through to univocity if users set it explicitly in CSV dataSource
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-42335
>                 URL: https://issues.apache.org/jira/browse/SPARK-42335
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wei Guo
>            Assignee: Wei Guo
>            Priority: Minor
>             Fix For: 3.4.0
>
>         Attachments: image-2023-02-03-18-56-01-596.png, image-2023-02-03-18-56-10-083.png
>
>
> In PR [https://github.com/apache/spark/pull/29516], univocity-parsers, which the CSV data source uses, was upgraded from 2.8.3 to 2.9.0 to fix some bugs. The new version also introduced a behavior change: when writing, values in the first column that start with the comment character are now quoted. This is a breaking change for downstream users who consume the whole row as raw input.
>  
> For example:
> {code:java}
> Seq(("#abc", 1)).toDF.write.csv("/Users/guowei/comment_test") {code}
> Before Spark 3.0, the content of the output CSV files was:
> !image-2023-02-03-18-56-01-596.png!
> After the upgrade, the content is:
> !image-2023-02-03-18-56-10-083.png!
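> Since the attached screenshots do not render in this plain-text view, here is a minimal sketch of the difference (run in spark-shell, where `spark` and its implicits are in scope; the path is illustrative and the expected file contents are inferred from the attachments):
> {code:java}
> // Write a value that starts with the default comment character '#'.
> Seq(("#abc", 1)).toDF.write.csv("/tmp/comment_quoting_test")
>
> // Expected contents of the output part file (inferred from the attachments):
> //   Spark 2.4 and before:  #abc,1
> //   Spark 3.0 and after:   "#abc",1   (quoted because the value starts with '#')
> {code}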
> Users can't set the comment option to '\u0000' to keep the previous behavior, because of the newly added `isCommentSet` check logic:
> {code:java}
> val isCommentSet = this.comment != '\u0000'
> def asWriterSettings: CsvWriterSettings = {
>   // other code
>   if (isCommentSet) {
>     format.setComment(comment)
>   }
>   // other code
> }
>  {code}
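> In other words (a simplified sketch, not the actual CSVOptions code), an explicit '\u0000' cannot be distinguished from the unset default:
> {code:java}
> // Simplified sketch of the current option handling: the default comment
> // character is NUL, so explicitly setting the option to NUL produces exactly
> // the same state as leaving it unset, and isCommentSet stays false.
> val parameters = Map("comment" -> "\u0000")   // user sets the option explicitly
> val comment: Char =
>   parameters.get("comment").map(_.charAt(0)).getOrElse('\u0000')
> val isCommentSet = comment != '\u0000'        // false, same as when unset
> {code}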
> It's better to pass the comment option through to univocity if users set it explicitly in the CSV data source.
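> One possible direction is sketched below (hypothetical and simplified, not the merged patch): derive `isCommentSet` from whether the option key was supplied rather than from its value:
> {code:java}
> import com.univocity.parsers.csv.CsvWriterSettings
>
> // Hypothetical sketch (illustrative, not the actual Spark patch): treat the
> // comment option as "set" whenever the user supplied the key, instead of
> // comparing its value against the NUL default.
> val parameters = Map("comment" -> "\u0000")
> val comment: Char =
>   parameters.get("comment").map(_.charAt(0)).getOrElse('\u0000')
> val isCommentSet = parameters.contains("comment")   // true: the user set it
>
> def asWriterSettings: CsvWriterSettings = {
>   val settings = new CsvWriterSettings()
>   if (isCommentSet) {
>     // Pass the character through to univocity even when it is NUL; per the
>     // table below, '#' is then no longer treated as a comment when writing.
>     settings.getFormat.setComment(comment)
>   }
>   settings
> }
> {code}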
>  
> After this change, the behavior is as follows (a read-side reproduction sketch appears after the table):
> ||id||code||2.4 and before||3.0 and after||this update||remark||
> |1|Seq("#abc", "\u0000def", "xyz").toDF().write.option("comment", "\u0000").csv(path)|#abc \\ *def* \\ xyz|"#abc" \\ *def* \\ xyz|#abc \\ *"def"* \\ xyz|this update differs slightly from 3.0|
> |2|Seq("#abc", "\u0000def", "xyz").toDF().write.option("comment", "#").csv(path)|#abc \\ *def* \\ xyz|"#abc" \\ *def* \\ xyz|"#abc" \\ *def* \\ xyz|the same|
> |3|Seq("#abc", "\u0000def", "xyz").toDF().write.csv(path)|#abc \\ *def* \\ xyz|"#abc" \\ *def* \\ xyz|"#abc" \\ *def* \\ xyz|default behavior: the same|
> |4|Seq("#abc", "\u0000def", "xyz").toDF().write.text(path) \\ spark.read.option("comment", "\u0000").csv(path)|#abc \\ xyz|#abc \\ \u0000def \\ xyz|#abc \\ xyz|this update differs slightly from 3.0|
> |5|Seq("#abc", "\u0000def", "xyz").toDF().write.text(path) \\ spark.read.option("comment", "#").csv(path)|\u0000def \\ xyz|\u0000def \\ xyz|\u0000def \\ xyz|the same|
> |6|Seq("#abc", "\u0000def", "xyz").toDF().write.text(path) \\ spark.read.csv(path)|#abc \\ xyz|#abc \\ \u0000def \\ xyz|#abc \\ \u0000def \\ xyz|default behavior: the same|
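> A read-side reproduction sketch for rows 4-6 above (the path and session setup are illustrative; the per-version outputs are as listed in the table):
> {code:java}
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder().master("local[1]").appName("comment-option").getOrCreate()
> import spark.implicits._
>
> val path = "/tmp/comment_read_test"   // illustrative path
> Seq("#abc", "\u0000def", "xyz").toDF().write.text(path)
>
> // Row 4: comment character explicitly set to NUL.
> spark.read.option("comment", "\u0000").csv(path).show()
>
> // Row 5: '#' as the comment character, so the "#abc" line is skipped.
> spark.read.option("comment", "#").csv(path).show()
>
> // Row 6: default behavior, no comment option set.
> spark.read.csv(path).show()
> {code}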
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org