Posted to issues@spark.apache.org by "liuxian (JIRA)" <ji...@apache.org> on 2018/10/17 01:02:00 UTC

[jira] [Created] (SPARK-25753) binaryFiles broken for small files

liuxian created SPARK-25753:
-------------------------------

             Summary: binaryFiles broken for small files
                 Key: SPARK-25753
                 URL: https://issues.apache.org/jira/browse/SPARK-25753
             Project: Spark
          Issue Type: Bug
          Components: Input/Output
    Affects Versions: 3.0.0
            Reporter: liuxian


{{StreamFileInputFormat}} has the same problem as {{WholeTextFileInputFormat}} (https://issues.apache.org/jira/browse/SPARK-24610): for small files, the {{maxSplitSize}} computed by {{StreamFileInputFormat}} is far smaller than the default or commonly used split size of 64/128 MB, and Spark throws an exception while trying to read them.
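
A minimal way to reproduce in spark-shell. This is a sketch, not taken from the report: the input path is hypothetical, and the per-node minimum of 5123456 bytes is chosen to match the trace below.

{code:scala}
// Ask Hadoop's CombineFileInputFormat for at least ~5 MB per node.
sc.hadoopConfiguration.setLong(
  "mapreduce.input.fileinputformat.split.minsize.per.node", 5123456L)

// StreamFileInputFormat.setMinPartitions() derives maxSplitSize from the
// total input size divided by the partition count, so a directory of small
// files yields a maxSplitSize below the per-node minimum, and
// CombineFileInputFormat.getSplits() throws when partitions are computed.
sc.binaryFiles("/tmp/small-files").count()
{code}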

Exception info:
{noformat}
Minimum split size pernode 5123456 cannot be larger than maximum split size 4194304
java.io.IOException: Minimum split size pernode 5123456 cannot be larger than maximum split size 4194304
	at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:201)
	at org.apache.spark.rdd.BinaryFileRDD.getPartitions(BinaryFileRDD.scala:52)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:254)
	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:252)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2138)
{noformat}
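
For reference, SPARK-24610 addressed this failure in {{WholeTextFileInputFormat}} by clamping the per-node and per-rack minimum split sizes down to the computed {{maxSplitSize}}. Below is a minimal sketch of the analogous change for {{StreamFileInputFormat.setMinPartitions}}, assuming its existing total-length-based computation; it is an illustration of the approach, not the committed patch.

{code:scala}
import scala.collection.JavaConverters._
import org.apache.hadoop.mapreduce.JobContext
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat

// Sketch: inside StreamFileInputFormat (a CombineFileInputFormat subclass).
def setMinPartitions(context: JobContext, minPartitions: Int): Unit = {
  val totalLen = listStatus(context).asScala
    .filterNot(_.isDirectory).map(_.getLen).sum
  val maxSplitSize = math.ceil(totalLen / math.max(minPartitions, 1.0)).toLong

  // For small inputs, keep the configured per-node / per-rack minimums from
  // exceeding the computed maxSplitSize, so getSplits() cannot throw.
  val conf = context.getConfiguration
  if (maxSplitSize < conf.getLong(CombineFileInputFormat.SPLIT_MINSIZE_PERNODE, 0L)) {
    super.setMinSplitSizeNode(maxSplitSize)
  }
  if (maxSplitSize < conf.getLong(CombineFileInputFormat.SPLIT_MINSIZE_PERRACK, 0L)) {
    super.setMinSplitSizeRack(maxSplitSize)
  }
  super.setMaxSplitSize(maxSplitSize)
}
{code}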



