Posted to user@spark.apache.org by Akhil Das <ak...@sigmoidanalytics.com> on 2014/07/01 14:44:25 UTC

Re: Failed to launch Worker

Is this command working??

java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077

Thanks
Best Regards


On Tue, Jul 1, 2014 at 6:08 PM, MEETHU MATHEW <me...@yahoo.co.in>
wrote:

>
>  Hi ,
>
> I am using Spark Standalone mode with one master and 2 slaves. I am not
> able to start the workers and connect them to the master using
>
> ./bin/spark-class org.apache.spark.deploy.worker.Worker
> spark://x.x.x.174:7077
>
> The log says
>
> Exception in thread "main" org.jboss.netty.channel.ChannelException:
> Failed to bind to: master/x.x.x.174:0
>  at
> org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>  ...
> Caused by: java.net.BindException: Cannot assign requested address
>
> When I try to start the worker from the slaves using the following java
> command, it runs without any exception
>
> java -cp
> ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar
> -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m
> org.apache.spark.deploy.worker.Worker spark://:master:7077
>
>
>
>
> Thanks & Regards,
> Meethu M
>
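One detail worth checking before digging into network configuration: the two commands in this thread use different master URLs, and `spark://:master:7077` has a stray colon before the hostname. A minimal shell sketch that flags that shape of mistake (the helper function is mine, not part of Spark):

```shell
# Hypothetical sanity check for a standalone master URL of the form
# spark://HOST:PORT. It only catches gross shape errors, such as the
# stray leading colon in "spark://:master:7077" from this thread.
check_master_url() {
  case "$1" in
    spark://[!:/]*:[0-9]*) echo "ok" ;;
    *)                     echo "malformed" ;;
  esac
}

check_master_url "spark://x.x.x.174:7077"   # ok
check_master_url "spark://:master:7077"     # malformed
```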

Re: Failed to launch Worker

Posted by MEETHU MATHEW <me...@yahoo.co.in>.
I am running the ./bin/spark-class command from the workers.

I have added my slaves in the conf/slaves file. Both ./sbin/start-all.sh and ./sbin/start-slaves.sh fail with the "Failed to launch Worker" exception and the log shown in my first mail.
 
I am using a standalone Spark cluster with Hadoop 1.2.1.


Thanks & Regards, 
Meethu M
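Since both start scripts hit the same "Cannot assign requested address" on every slave, the worker is likely trying to bind an address that is not configured on that host (a mis-resolving /etc/hosts entry is one common cause). A sketch of one workaround, an assumption on my part rather than something confirmed in this thread: pin each worker's bind address with SPARK_LOCAL_IP in conf/spark-env.sh. The demo path and IP below are placeholders.

```shell
# Sketch: write a per-host spark-env.sh fragment pinning the worker's bind
# address. Real location is $SPARK_HOME/conf/spark-env.sh; the address is
# an example and should be the slave's own IP.
SPARK_CONF_DIR=${SPARK_CONF_DIR:-/tmp/spark-conf-demo}
mkdir -p "$SPARK_CONF_DIR"
cat > "$SPARK_CONF_DIR/spark-env.sh" <<'EOF'
# Bind this worker to the host's own address instead of a name that may
# resolve to the master's IP.
export SPARK_LOCAL_IP=192.168.0.11
EOF
grep -c SPARK_LOCAL_IP "$SPARK_CONF_DIR/spark-env.sh"   # 1
```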


On Tuesday, 1 July 2014 11:52 PM, Aaron Davidson <il...@gmail.com> wrote:
 


Where are you running the spark-class version? Hopefully also on the workers.

If you're trying to centrally start/stop all workers, you can add a "slaves" file to the spark conf/ directory which is just a list of your hosts, one per line. Then you can just use "./sbin/start-slaves.sh" to start the worker on all of your machines.

Note that this is already setup correctly if you're using the spark-ec2 scripts.




Re: Failed to launch Worker

Posted by Aaron Davidson <il...@gmail.com>.
Where are you running the spark-class version? Hopefully also on the
workers.

If you're trying to centrally start/stop all workers, you can add a
"slaves" file to the spark conf/ directory which is just a list of your
hosts, one per line. Then you can just use "./sbin/start-slaves.sh" to
start the worker on all of your machines.

Note that this is already setup correctly if you're using the spark-ec2
scripts.
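The setup described above can be sketched as follows; the hostnames and the demo path are placeholders, and in a real cluster the file goes to $SPARK_HOME/conf/slaves:

```shell
# Sketch of the conf/slaves setup: one worker hostname per line.
# sbin/start-slaves.sh ssh-es to each listed host and launches a Worker
# pointed at the master.
DEMO_CONF=/tmp/spark-conf-demo-slaves
mkdir -p "$DEMO_CONF"
cat > "$DEMO_CONF/slaves" <<'EOF'
slave1.example.com
slave2.example.com
EOF
# Real usage (not run here):
#   cp "$DEMO_CONF/slaves" "$SPARK_HOME/conf/slaves"
#   "$SPARK_HOME/sbin/start-slaves.sh"
wc -l < "$DEMO_CONF/slaves"
```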


On Tue, Jul 1, 2014 at 5:53 AM, MEETHU MATHEW <me...@yahoo.co.in>
wrote:

> Yes.
>
> Thanks & Regards,
> Meethu M

Re: Failed to launch Worker

Posted by MEETHU MATHEW <me...@yahoo.co.in>.
Yes.
 
Thanks & Regards, 
Meethu M


On Tuesday, 1 July 2014 6:14 PM, Akhil Das <ak...@sigmoidanalytics.com> wrote:
 


Is this command working??

java -cp ::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apache.spark.deploy.worker.Worker spark://x.x.x.174:7077



Thanks
Best Regards

