Posted to issues@spark.apache.org by "Arthur Baudry (JIRA)" <ji...@apache.org> on 2017/10/11 00:14:00 UTC

[jira] [Created] (SPARK-22240) S3 CSV number of partitions incorrectly computed

Arthur Baudry created SPARK-22240:
-------------------------------------

             Summary: S3 CSV number of partitions incorrectly computed
                 Key: SPARK-22240
                 URL: https://issues.apache.org/jira/browse/SPARK-22240
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.2.0
         Environment: Running on EMR 5.8.0 with Hadoop 2.7.3 and Spark 2.2.0
            Reporter: Arthur Baudry


Reading a CSV file out of S3 using the S3A protocol does not compute the number of partitions correctly in Spark 2.2.0.

With Spark 2.2.0 I get only one partition when loading a 14 GB file:
{code:java}
scala> val input = spark.read.format("csv").option("header", "true").option("delimiter", "|").option("multiLine", "true").load("s3a://<s3_path>")
input: org.apache.spark.sql.DataFrame = [PARTY_KEY: string, ROW_START_DATE: string ... 36 more fields]

scala> input.rdd.getNumPartitions
res2: Int = 1
{code}

While in Spark 2.0.2 I had:
{code:java}
scala> val input = spark.read.format("csv").option("header", "true").option("delimiter", "|").option("multiLine", "true").load("s3a://<s3_path>")
input: org.apache.spark.sql.DataFrame = [PARTY_KEY: string, ROW_START_DATE: string ... 36 more fields]

scala> input.rdd.getNumPartitions
res2: Int = 115
{code}

This introduces obvious performance issues in Spark 2.2.0, since everything downstream of the load starts from a single partition. Perhaps there is a property that should be set so that the number of partitions is computed correctly.
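
As a possible workaround until this is resolved (a sketch on my side, not verified against this dataset; <s3_path> is the same placeholder as above), the DataFrame can be repartitioned explicitly right after the load:
{code:java}
// Sketch of a workaround: repartition immediately after the load so that
// downstream stages regain parallelism.
val input = spark.read
  .format("csv")
  .option("header", "true")
  .option("delimiter", "|")
  .option("multiLine", "true")
  .load("s3a://<s3_path>")

// 115 matches the partition count Spark 2.0.2 produced for this file;
// any parallelism suited to the cluster would do.
val repartitioned = input.repartition(115)
assert(repartitioned.rdd.getNumPartitions == 115)
{code}
Note that repartition forces a shuffle and the initial read still runs as a single task, so this only restores parallelism for the stages after the load.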

I'm aware that .option("multiLine", "true") is not supported in Spark 2.0.2; that is not relevant here.
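
That said, to isolate the variable (an assumption on my part, not something I have confirmed), the same load can be repeated in 2.2.0 without the multiLine option to see whether that option alone collapses the input to a single partition:
{code:java}
// Same read as above minus multiLine (hypothetical probe; <s3_path> as before).
val noMultiLine = spark.read
  .format("csv")
  .option("header", "true")
  .option("delimiter", "|")
  .load("s3a://<s3_path>")

// If this reports ~115 partitions while the multiLine read reports 1,
// the regression is tied to the multiLine code path.
println(noMultiLine.rdd.getNumPartitions)
{code}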


