Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/02/28 02:24:45 UTC

[jira] [Commented] (SPARK-19761) create InMemoryFileIndex with empty rootPaths when set PARALLEL_PARTITION_DISCOVERY_THRESHOLD to zero

    [ https://issues.apache.org/jira/browse/SPARK-19761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887095#comment-15887095 ] 

Apache Spark commented on SPARK-19761:
--------------------------------------

User 'windpiger' has created a pull request for this issue:
https://github.com/apache/spark/pull/17093

> create InMemoryFileIndex with empty rootPaths when set PARALLEL_PARTITION_DISCOVERY_THRESHOLD to zero
> -----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19761
>                 URL: https://issues.apache.org/jira/browse/SPARK-19761
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Song Jun
>
> If we create an InMemoryFileIndex with empty rootPaths while PARALLEL_PARTITION_DISCOVERY_THRESHOLD is set to zero, it throws an exception:
> {code}
> Positive number of slices required
> java.lang.IllegalArgumentException: Positive number of slices required
>         at org.apache.spark.rdd.ParallelCollectionRDD$.slice(ParallelCollectionRDD.scala:119)
>         at org.apache.spark.rdd.ParallelCollectionRDD.getPartitions(ParallelCollectionRDD.scala:97)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>         at scala.Option.getOrElse(Option.scala:121)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>         at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>         at scala.Option.getOrElse(Option.scala:121)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>         at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
>         at scala.Option.getOrElse(Option.scala:121)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:2084)
>         at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>         at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
>         at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex$.org$apache$spark$sql$execution$datasources$PartitioningAwareFileIndex$$bulkListLeafFiles(PartitioningAwareFileIndex.scala:357)
>         at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.listLeafFiles(PartitioningAwareFileIndex.scala:256)
>         at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:74)
>         at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:50)
>         at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9$$anonfun$apply$mcV$sp$2.apply$mcV$sp(FileIndexSuite.scala:186)
>         at org.apache.spark.sql.test.SQLTestUtils$class.withSQLConf(SQLTestUtils.scala:105)
>         at org.apache.spark.sql.execution.datasources.FileIndexSuite.withSQLConf(FileIndexSuite.scala:33)
>         at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9.apply$mcV$sp(FileIndexSuite.scala:185)
>         at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9.apply(FileIndexSuite.scala:185)
>         at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9.apply(FileIndexSuite.scala:185)
>         at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>         at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
> {code}
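> A minimal sketch of a reproduction (assuming the Spark 2.2-era InMemoryFileIndex constructor and the usual SQLConf key behind PARALLEL_PARTITION_DISCOVERY_THRESHOLD; both may differ slightly across versions):
> {code}
> import org.apache.hadoop.fs.Path
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.execution.datasources.InMemoryFileIndex
>
> val spark = SparkSession.builder()
>   .master("local[*]")
>   .appName("SPARK-19761-repro")
>   .getOrCreate()
>
> // A threshold of zero forces the parallel (RDD-based) file listing path,
> // even when there is nothing to list.
> spark.conf.set("spark.sql.sources.parallelPartitionDiscovery.threshold", "0")
>
> // With zero root paths, bulkListLeafFiles parallelizes an empty collection
> // with numSlices = 0, which ParallelCollectionRDD.slice rejects with
> // "Positive number of slices required" (see the stack trace above).
> new InMemoryFileIndex(spark, Seq.empty[Path], Map.empty[String, String], None)
> {code}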



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org