Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2017/07/04 05:15:02 UTC

[jira] [Commented] (SPARK-21289) Text and CSV formats do not support custom end-of-line delimiters

    [ https://issues.apache.org/jira/browse/SPARK-21289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16073112#comment-16073112 ] 

Hyukjin Kwon commented on SPARK-21289:
--------------------------------------

All the related information is in the JIRA. Initially, SPARK-21098 was a duplicate of this one, but I suggested turning it into the issue that fixes the line delimiter.

> Text and CSV formats do not support custom end-of-line delimiters
> -----------------------------------------------------------------
>
>                 Key: SPARK-21289
>                 URL: https://issues.apache.org/jira/browse/SPARK-21289
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.1
>            Reporter: Yevgen Galchenko
>            Priority: Minor
>
> Spark's CSV and text readers always use the default CR, LF, or CRLF line terminators, with no option to configure a custom delimiter.
> The option "textinputformat.record.delimiter" is not used to set the delimiter in HadoopFileLinesReader; it can only be set for a Hadoop RDD, e.g. when textFile() is used to read the file (a sketch of that workaround follows below).
> A possible solution would be to change HadoopFileLinesReader to create the LineRecordReader with the delimiter specified in the configuration; LineRecordReader already supports passing a recordDelimiter to its constructor.
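For reference, here is a minimal sketch (not from the original report) of the RDD-level workaround described above: set "textinputformat.record.delimiter" on a Hadoop Configuration and read through newAPIHadoopFile, since Hadoop's TextInputFormat honors that key while the text/CSV datasources do not. The path and delimiter are placeholders, and "spark" is assumed to be an active SparkSession.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

    // Copy the session's Hadoop configuration and set a custom record delimiter.
    val conf = new Configuration(spark.sparkContext.hadoopConfiguration)
    conf.set("textinputformat.record.delimiter", "||")  // placeholder delimiter

    // Read through the Hadoop RDD API; TextInputFormat passes the configured
    // delimiter bytes to the LineRecordReader it creates.
    val lines = spark.sparkContext
      .newAPIHadoopFile("/path/to/input", classOf[TextInputFormat],
        classOf[LongWritable], classOf[Text], conf)
      .map { case (_, text) => text.toString }

The fix proposed in the description would instead have HadoopFileLinesReader pass the configured delimiter bytes to the LineRecordReader constructor, so the same option could work for the text and CSV datasources directly.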



