Posted to issues@spark.apache.org by "Qionghui Zhang (Jira)" <ji...@apache.org> on 2020/07/07 07:52:00 UTC

[jira] [Created] (SPARK-32206) Enabling multiline=true can break CSV reads in Azure Data Lake Storage Gen2

Qionghui Zhang created SPARK-32206:
--------------------------------------

             Summary: Enabling multiline=true can break CSV reads in Azure Data Lake Storage Gen2
                 Key: SPARK-32206
                 URL: https://issues.apache.org/jira/browse/SPARK-32206
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.4.5, 2.3.2
            Reporter: Qionghui Zhang


I'm using Azure Data Lake Storage Gen2. When I load a DataFrame with the following options:

val df = spark.read.format("csv")
  .option("ignoreLeadingWhiteSpace", "true")
  .option("ignoreTrailingWhiteSpace", "true")
  .option("parserLib", "UNIVOCITY")
  .option("multiline", "true")
  .option("inferSchema", "true")
  .option("mode", "PERMISSIVE")
  .option("quote", "\"")
  .option("escape", "\"")
  .option("timestampFormat", "M/d/yyyy H:m:s a")
  .load("abfss://{containername}@{storage}.dfs.core.windows.net/{DirectoryWithoutColon}")
  .limit(1)

It will load data correctly.

 

But if I use {DirectoryWithColon} instead, it throws an error:

java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: Query.csv@snapshot=2020-07-07T07:30:12.1000093Z.
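[Editor's note, not part of the original report: the exception appears to stem from how a file name containing ':' is split into URI components before being handed to java.net.URI. The sketch below mimics, in simplified form, Hadoop-style splitting on the first colon of the snapshot-suffixed file name from the report and reproduces the same URISyntaxException reason with plain java.net.URI.]

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ColonPathDemo {
    public static void main(String[] args) {
        // The ADLS snapshot suffix puts colons in the file name.
        String name = "Query.csv@snapshot=2020-07-07T07:30:12.1000093Z";

        // Simplified version of the split: everything before the first ':'
        // (with no earlier '/') is taken as a URI scheme, so the timestamp's
        // colon causes a bogus scheme/path split.
        int colon = name.indexOf(':');
        String scheme = name.substring(0, colon); // "Query.csv@snapshot=2020-07-07T07"
        String path = name.substring(colon + 1);  // "30:12.1000093Z"

        try {
            // With a scheme present, java.net.URI requires the path to be
            // absolute (start with '/'), so this constructor throws.
            new URI(scheme, null, path, null, null);
            System.out.println("parsed ok");
        } catch (URISyntaxException e) {
            System.out.println(e.getReason()); // Relative path in absolute URI
        }
    }
}
```

This matches the message in the report: the file name itself is being rejected as a "relative path in an absolute URI" because the colon is misread as a scheme separator.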

 

If I remove .option("multiline", "true"), the data can be loaded, but the DataFrame is not parsed correctly because the fields contain newline characters.

 

So I believe this is a bug.

 

Our production job runs correctly when we supply an explicit schema via spark.read.schema({SomeSchemaList}).format("csv"), but we want to use the inferSchema feature on file paths containing colons or other special characters. Could you help fix this issue?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org