Posted to user@spark.apache.org by Kevin Jung <it...@samsung.com> on 2014/07/16 04:44:21 UTC

Spark 1.0.1 akka connection refused

Hi,
I recently upgraded my Spark 1.0.0 cluster to 1.0.1.
But it gives me "ERROR remote.EndpointWriter: AssociationError" when I run a
simple Spark SQL job in spark-shell.

Here is the code:

// Run in spark-shell; sc is the shell's pre-built SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._

case class Person(name: String, Age: Int, Gender: String, Birth: String)

// Parse each CSV line into a Person.
val peopleRDD = sc.textFile("/sample/sample.csv")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).toInt, p(2), p(3)))
peopleRDD.collect().foreach(println)

And here is the error message on the worker node:
14/07/16 10:58:04 ERROR remote.EndpointWriter: AssociationError
[akka.tcp://sparkWorker@my.test6.com:38030] ->
[akka.tcp://sparkExecutor@my.test6.com:35534]: Error [Association failed
with [akka.tcp://sparkExecutor@my.tes$
akka.remote.EndpointAssociationException: Association failed with
[akka.tcp://sparkExecutor@my.test6.com:35534]
Caused by:
akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
Connection refused: my.test6.com/

And here is the error message in the shell:
14/07/16 11:33:15 WARN scheduler.TaskSetManager: Loss was due to java.lang.NoClassDefFoundError
java.lang.NoClassDefFoundError: Could not initialize class $line10.$read$
        at $line14.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
        at $line14.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1083)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)

I tested the same code on Spark 1.0.0 (same machines) and it ran fine.
It looks as if the Worker cannot reach the Executor's Akka endpoint, though
the NoClassDefFoundError suggests the executor may be dying first; a quick
isolation sketch follows.
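A hedged way to check this (untested; same /sample/sample.csv path as above)
is to run a job that involves no classes defined in the shell. If that
succeeds, the AssociationError is likely a secondary symptom of the executor
failing on the REPL-generated $line10.$read$ wrapper class rather than a
networking problem:

// Untested isolation sketch: only library types, no shell-defined classes.
sc.textFile("/sample/sample.csv")
  .map(_.split(",").length)  // number of fields per line
  .collect()
  .foreach(println)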
Do you have any ideas?

Best regards
Kevin




Re: Spark 1.0.1 akka connection refused

Posted by Walrus theCat <wa...@gmail.com>.
I'm getting similar errors with Spark Streaming, but at this point in my
project I don't need a cluster and can develop locally. I'll write it up
later, though, if it persists.
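For reference, by "develop locally" I just mean a local-mode context instead
of a cluster; roughly this (a sketch, with an illustrative app name and
thread count):

import org.apache.spark.{SparkConf, SparkContext}

// Local mode runs the driver and executors in a single JVM, so no remote
// Akka association between Worker and Executor is involved.
val conf = new SparkConf()
  .setMaster("local[4]")        // 4 worker threads on this machine
  .setAppName("streaming-dev")  // illustrative name
val sc = new SparkContext(conf)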


On Tue, Jul 15, 2014 at 7:44 PM, Kevin Jung <it...@samsung.com> wrote:

> Hi,
> I recently upgraded my Spark 1.0.0 cluster to 1.0.1.
> But it gives me "ERROR remote.EndpointWriter: AssociationError" when I run a
> simple Spark SQL job in spark-shell.
> [snip -- full message quoted above]

Re: Spark 1.0.1 akka connection refused

Posted by Kevin Jung <it...@samsung.com>.
UPDATE:
It happens only when I define a case class in spark-shell and map an RDD to
that class.
Other RDD transformations, SchemaRDDs built from Parquet files, and the other
Spark SQL operations all work fine.
Were there any changes to how case classes are handled between 1.0.0 and
1.0.1?
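
To make the boundary concrete, here is the minimal pair as I understand it
(an untested sketch, reusing the hypothetical /sample/sample.csv from my
first mail):

// Fails on my 1.0.1 shell: Person lives inside a REPL-generated
// $lineNN.$read$ wrapper that the executor cannot initialize.
case class Person(name: String, Age: Int, Gender: String, Birth: String)
sc.textFile("/sample/sample.csv")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).toInt, p(2), p(3)))
  .collect()

// Expected to work, consistent with the other transformations above:
// plain tuples reference no shell-defined classes.
sc.textFile("/sample/sample.csv")
  .map(_.split(","))
  .map(p => (p(0), p(1).toInt, p(2), p(3)))
  .collect()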

Best regards 
Kevin


