Posted to issues@spark.apache.org by "Kan Zhang (JIRA)" <ji...@apache.org> on 2014/05/05 03:40:17 UTC

[jira] [Assigned] (SPARK-1690) RDD.saveAsTextFile throws scala.MatchError if RDD contains empty elements

     [ https://issues.apache.org/jira/browse/SPARK-1690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kan Zhang reassigned SPARK-1690:
--------------------------------

    Assignee: Kan Zhang

> RDD.saveAsTextFile throws scala.MatchError if RDD contains empty elements
> -------------------------------------------------------------------------
>
>                 Key: SPARK-1690
>                 URL: https://issues.apache.org/jira/browse/SPARK-1690
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 0.9.0
>         Environment: Linux/CentOS6, Spark 0.9.1, standalone mode against HDFS from Hadoop 1.2.1
>            Reporter: Glenn K. Lockwood
>            Assignee: Kan Zhang
>            Priority: Minor
>
> The following PySpark code fails with a scala.MatchError exception if sample.txt contains any empty lines:
> file = sc.textFile('hdfs://gcn-3-45.ibnet0:54310/user/glock/sample.txt')
> file.saveAsTextFile('hdfs://gcn-3-45.ibnet0:54310/user/glock/sample.out')
> The resulting stack trace:
> 14/04/30 17:02:46 WARN scheduler.TaskSetManager: Lost TID 0 (task 0.0:0)
> 14/04/30 17:02:46 WARN scheduler.TaskSetManager: Loss was due to scala.MatchError
> scala.MatchError: 0 (of class java.lang.Integer)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:129)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.next(PythonRDD.scala:119)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.next(PythonRDD.scala:112)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> 	at org.apache.spark.rdd.PairRDDFunctions.org$apache$spark$rdd$PairRDDFunctions$$writeToFile$1(PairRDDFunctions.scala:732)
> 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$2.apply(PairRDDFunctions.scala:741)
> 	at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$2.apply(PairRDDFunctions.scala:741)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:53)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:211)
> 	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
> 	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> 	at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:722)
> This can be reproduced with a sample.txt containing
> """
> foo
>
> bar
> """
> and disappears if sample.txt is
> """
> foo
> bar
> """


