Posted to user@spark.apache.org by gogototo <wa...@gmail.com> on 2014/04/22 05:20:06 UTC

how to solve this problem?

14/04/22 10:43:45 WARN scheduler.TaskSetManager: Loss was due to java.util.NoSuchElementException
java.util.NoSuchElementException: End of stream
        at org.apache.spark.util.NextIterator.next(NextIterator.scala:83)
        at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:29)
        at org.apache.spark.graphx.impl.RoutingTable$$anonfun$1.apply(RoutingTable.scala:52)
        at org.apache.spark.graphx.impl.RoutingTable$$anonfun$1.apply(RoutingTable.scala:51)
        at org.apache.spark.rdd.RDD$$anonfun$1.apply(RDD.scala:450)
        at org.apache.spark.rdd.RDD$$anonfun$1.apply(RDD.scala:450)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
        at org.apache.spark.scheduler.Task.run(Task.scala:53)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
        at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/how-to-solve-this-problem-tp4579.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: how to solve this problem?

Posted by Ankur Dave <an...@gmail.com>.
This is a known bug in GraphX, and the fix is in PR #367:
https://github.com/apache/spark/pull/367

Applying that PR should solve the problem.
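If you build Spark from source, one possible way to pick up the fix before it lands in a release is to fetch the pull request's branch and rebuild the assembly. A rough sketch only (the local branch name pr-367 is arbitrary, and the exact build command depends on your checkout; sbt assembly was the usual path for Spark at the time):

```shell
# Inside a clone of the apache/spark repository:
# fetch the head of PR #367 into a local branch and switch to it.
git fetch origin pull/367/head:pr-367
git checkout pr-367

# Rebuild the assembly jar so executors run the patched GraphX classes.
sbt/sbt assembly
```

After rebuilding, redeploy the assembly to your cluster before rerunning the job.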

Ankur <http://www.ankurdave.com/>


On Mon, Apr 21, 2014 at 8:20 PM, gogototo <wa...@gmail.com> wrote:

> java.util.NoSuchElementException: End of stream
>

Re: how to solve this problem?

Posted by Akhil Das <ak...@sigmoidanalytics.com>.
Hi,

Would you mind sharing the piece of code that caused this exception? Per the
Javadoc, NoSuchElementException is thrown when you call nextElement() on an
Enumeration (or next() on an Iterator) that has no more elements.
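To illustrate that contract with a minimal standalone example (this is not the original poster's code, just a sketch of the exception's semantics): calling next() on an exhausted java.util Iterator throws NoSuchElementException, which is the same exception type Spark's NextIterator surfaces in the trace above.

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Minimal demo: next() past the last element throws NoSuchElementException.
public class ExhaustedIteratorDemo {
    public static void main(String[] args) {
        Iterator<Integer> it = List.of(1, 2).iterator();
        it.next(); // returns 1
        it.next(); // returns 2
        try {
            it.next(); // no elements left: throws NoSuchElementException
        } catch (NoSuchElementException e) {
            System.out.println("caught: " + e);
        }
    }
}
```

In Spark the iterator is consumed inside a task, so the same condition shows up as a task failure logged by the TaskSetManager rather than an exception at the call site.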



Thanks
Best Regards.


On Tue, Apr 22, 2014 at 8:50 AM, gogototo <wa...@gmail.com> wrote:

> 14/04/22 10:43:45 WARN scheduler.TaskSetManager: Loss was due to java.util.NoSuchElementException
> java.util.NoSuchElementException: End of stream
>         at org.apache.spark.util.NextIterator.next(NextIterator.scala:83)
> [...]