Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2019/02/23 20:02:00 UTC

[jira] [Resolved] (SPARK-26935) Skip DataFrameReader's CSV first line scan when not used

     [ https://issues.apache.org/jira/browse/SPARK-26935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-26935.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 3.0.0

Issue resolved by pull request 23830
[https://github.com/apache/spark/pull/23830]

> Skip DataFrameReader's CSV first line scan when not used
> --------------------------------------------------------
>
>                 Key: SPARK-26935
>                 URL: https://issues.apache.org/jira/browse/SPARK-26935
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.0
>         Environment: Spark Version 2.4.0
> Linux kernel 4.9.0-8-amd64 on Debian 9
> Docker container on Google Kubernetes Engine VM (n1-standard-8)
>  
>            Reporter: Douglas Colkitt
>            Assignee: Douglas Colkitt
>            Priority: Minor
>              Labels: easyfix, performance, pull-request-available
>             Fix For: 3.0.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> DataFrameReader always collects the first line of the first partition on every csv() call. That line is used for schema inference and header filtering, but when the user pre-specifies a schema and sets header to false, the scan serves no purpose.
> In some cases generating the CSV dataset is expensive and slow, and this ties up the executor with an entire job just to fetch a single line. Skipping the first-line scan when it isn't needed is a free win for runtime and efficiency.
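
A minimal sketch of the call pattern the issue targets, assuming the Dataset[String] overload of DataFrameReader.csv(); the path and column names are illustrative only:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

    object CsvNoFirstLineScan {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("csv-skip-first-line").getOrCreate()

        // Schema is fully specified up front, so no inference is needed.
        val schema = StructType(Seq(
          StructField("id", IntegerType, nullable = true),
          StructField("name", StringType, nullable = true)))

        // Dataset[String] of CSV lines; producing it may itself be expensive.
        val csvLines = spark.read.textFile("hdfs:///data/large.csv")

        // With an explicit schema and header=false, collecting the first line
        // for inference or header filtering adds nothing.
        val df = spark.read
          .schema(schema)
          .option("header", "false")
          .csv(csvLines)

        df.show(5)
      }
    }

With these options, the extra job that materializes one line from the first partition is pure overhead, which is what the linked pull request removes.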



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org