Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/04/01 01:00:52 UTC

[jira] [Commented] (SPARK-24540) Support for multiple delimiter in Spark CSV read

    [ https://issues.apache.org/jira/browse/SPARK-24540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16806316#comment-16806316 ] 

Hyukjin Kwon commented on SPARK-24540:
--------------------------------------

What you see is what you get. It's blocked by SPARK-17967.

> Support for multiple delimiter in Spark CSV read
> ------------------------------------------------
>
>                 Key: SPARK-24540
>                 URL: https://issues.apache.org/jira/browse/SPARK-24540
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: Ashwin K
>            Priority: Major
>
> Currently, the delimiter option Spark 2.0 uses to read and split CSV files/data supports only a single-character delimiter. If we try to provide a multi-character delimiter, we observe the following error message.
> e.g.: Dataset<Row> df = spark.read().option("inferSchema", "true")
>                                      .option("header", "false")
>                                      .option("delimiter", ", ")
>                                      .csv("C:\\test.txt");
> Exception in thread "main" java.lang.IllegalArgumentException: Delimiter cannot be more than one character: , 
> at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.toChar(CSVUtils.scala:111)
>  at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:83)
>  at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:39)
>  at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:55)
>  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
>  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
>  at scala.Option.orElse(Option.scala:289)
>  at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:201)
>  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:392)
>  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
>  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
>  at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:596)
>  at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:473)
>  
> Generally, the data to be processed contains multi-character delimiters, and presently we need to do a manual clean-up of the source/input file, which doesn't work well in large applications that consume numerous files.
> There is a workaround of reading the data as text and splitting it manually, but in my opinion this defeats the purpose, advantage, and efficiency of a direct read from a CSV file.
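> For illustration, a minimal sketch of that workaround (the ", " delimiter, the file path, and the three-column layout with names c0-c2 are assumptions for the example, not part of this report):
>
> import static org.apache.spark.sql.functions.col;
> import static org.apache.spark.sql.functions.split;
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> import org.apache.spark.sql.SparkSession;
>
> SparkSession spark = SparkSession.builder().appName("multi-char-delimiter").getOrCreate();
> // Read each line of the file into a single string column named "value".
> Dataset<Row> lines = spark.read().text("C:\\test.txt");
> // split() takes a regex, so a multi-character delimiter such as ", " works here;
> // the array elements are then projected into named columns by hand.
> Dataset<Row> df = lines
>     .select(split(col("value"), ", ").alias("cols"))
>     .selectExpr("cols[0] as c0", "cols[1] as c1", "cols[2] as c2");
>
> The obvious cost is that the schema and column names have to be applied manually, which is exactly what the CSV reader's inferSchema/header options would otherwise handle.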
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org