Posted to user@spark.apache.org by Nathan Kronenfeld <nk...@oculusinfo.com> on 2014/02/22 06:36:46 UTC

Trying to connect to spark from within a web server

Can anyone help me here?

I've got a small Spark cluster running on three machines - hadoop-s1,
hadoop-s2, and hadoop-s3 - with s1 acting as master and all three acting
as workers. It works fine - I can connect with spark-shell, I can run
jobs, and I can see the web UI.

The web UI says:
Spark Master at spark://hadoop-s1.oculus.local:7077
URL: spark://hadoop-s1.oculus.local:7077

I've connected to it fine using both a Scala and a Java SparkContext.

But when I try connecting from within a Tomcat service, I get the following
messages:
[INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class - Connecting
to master spark://hadoop-s1.oculus.local:7077...
[INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class - Connecting
to master spark://hadoop-s1.oculus.local:7077...
[ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All masters
are unresponsive! Giving up.
[ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
cluster looks dead, giving up.
[ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting due
to error from cluster scheduler: Spark cluster looks down

When I look at the Spark server logs, there isn't even a sign of an
attempted connection.

I'm trying to use a JavaSparkContext; I've printed out the parameters I
pass in, and the same parameters work fine in a stand-alone program.
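
For reference, the context creation looks roughly like this (simplified;
the spark home and jar paths here are placeholders, not my real values):

import org.apache.spark.api.java.JavaSparkContext;

JavaSparkContext sc = new JavaSparkContext(
    "spark://hadoop-s1.oculus.local:7077",  // master URL, as shown in the web UI
    "Web Service Spark Instance",           // application name
    "/opt/spark",                           // spark home on the cluster (placeholder)
    new String[] {"/path/to/my-app.jar"});  // jars to ship to workers (placeholder)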

Anyone have a clue why this fails? Or even how to find out why it fails?


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenfeld@oculusinfo.com

Re: Trying to connect to spark from within a web server

Posted by Nathan Kronenfeld <nk...@oculusinfo.com>.
I do notice that Scala 2.9.2 is being included because of net.liftweb.
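
For anyone hitting the same conflict: if the dependency comes in through
Maven, an exclusion along these lines should keep the old scala-library
off the webapp's classpath (the artifactId and version here are
illustrative - mvn dependency:tree will show the real ones):

<dependency>
  <groupId>net.liftweb</groupId>
  <artifactId>lift-json_2.9.2</artifactId>  <!-- illustrative -->
  <version>...</version>                    <!-- your version -->
  <exclusions>
    <exclusion>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
    </exclusion>
  </exclusions>
</dependency>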

Also, I don't know whether I just missed it before, or whether it wasn't
happening before and my latest changes get things a little farther, but
I'm now seeing the following in the Spark logs:

14/02/28 20:13:29 INFO actor.ActorSystemImpl: RemoteClientStarted
@akka://spark@hadoop-s1.oculus.local:35212
14/02/28 20:13:29 ERROR NettyRemoteTransport(null): dropping message
RegisterApplication(ApplicationDescription(Web Service Spark Instance)) for
non-local recipient akka://sparkMaster@192.168.0.46:7077/user/Master at
akka://sparkMaster@hadoop-s1.oculus.local:7077 local is
akka://sparkMaster@hadoop-s1.oculus.local:7077
14/02/28 20:13:49 ERROR NettyRemoteTransport(null): dropping message
RegisterApplication(ApplicationDescription(Web Service Spark Instance)) for
non-local recipient akka://sparkMaster@192.168.0.46:7077/user/Master at
akka://sparkMaster@hadoop-s1.oculus.local:7077 local is
akka://sparkMaster@hadoop-s1.oculus.local:7077
14/02/28 20:14:09 ERROR NettyRemoteTransport(null): dropping message
RegisterApplication(ApplicationDescription(Web Service Spark Instance)) for
non-local recipient akka://sparkMaster@192.168.0.46:7077/user/Master at
akka://sparkMaster@hadoop-s1.oculus.local:7077 local is
akka://sparkMaster@hadoop-s1.oculus.local:7077
14/02/28 20:14:32 INFO actor.ActorSystemImpl: RemoteClientShutdown
@akka://spark@hadoop-s1.oculus.local:35212
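
If I'm reading those right, the registration is being addressed to the
raw IP (192.168.0.46) while the master's actor system is bound under the
hostname, so Akka drops the message as non-local. A quick comparison
(hypothetical snippet) is to print what the hostname resolves to from
inside the webapp versus from the stand-alone client:

// throws UnknownHostException; wrap in try/catch as needed
System.out.println(java.net.InetAddress
    .getByName("hadoop-s1.oculus.local").getHostAddress());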



On Sat, Feb 22, 2014 at 1:58 PM, Soumya Simanta <so...@gmail.com> wrote:

> Most likely the classes/jars required to connect to Spark are not being
> loaded, or the wrong versions are being loaded, when you do this from
> inside the web container (Tomcat).
>
>
>
>
>
> On Sat, Feb 22, 2014 at 1:51 PM, Nathan Kronenfeld <
> nkronenfeld@oculusinfo.com> wrote:
>
>> Yes, but only when I try to connect from a web service running in Tomcat.
>>
>> When I try to connect using a stand-alone program, using the same
>> parameters, it works fine.
>>
>>
>> On Sat, Feb 22, 2014 at 12:15 PM, Mayur Rustagi <ma...@gmail.com> wrote:
>>
>>> So Spark is running on that IP, the web UI is loading on that IP and
>>> showing workers, and when you connect to that IP with the Java API the
>>> cluster appears to be down?
>>>
>>> Mayur Rustagi
>>> Ph: +919632149971
>>> http://www.sigmoidanalytics.com
>>> https://twitter.com/mayur_rustagi
>>>
>>>
>>>
>>> On Fri, Feb 21, 2014 at 10:22 PM, Nathan Kronenfeld <
>>> nkronenfeld@oculusinfo.com> wrote:
>>>
>>>> Netstat gives exactly the expected IP address (not a 127...., but a
>>>> 192...).
>>>> I tried it anyway, though... exactly the same results, but with a
>>>> number instead of a name.
>>>> Oh, and I forgot to mention last time, in case it makes a difference -
>>>> I'm running 0.8.1, not 0.9.0, at least for now.
>>>>
>>>>
>>>>
>>>> On Sat, Feb 22, 2014 at 12:50 AM, Mayur Rustagi <
>>>> mayur.rustagi@gmail.com> wrote:
>>>>
>>>>> Most likely the master is binding to one address and you are
>>>>> connecting to some other one. The master can bind to an internal
>>>>> 127.0.* address, or to whatever the machine's IP happened to be at
>>>>> startup. The easiest check is:
>>>>> netstat -an | grep 7077
>>>>> This will show which address the master is actually bound to, and
>>>>> thus exactly which address to use when launching the Spark context.
>>>>>
>>>>> Mayur Rustagi
>>>>> Ph: +919632149971
>>>>> http://www.sigmoidanalytics.com
>>>>> https://twitter.com/mayur_rustagi
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
>>>>> nkronenfeld@oculusinfo.com> wrote:
>>>>>
>>>>>> Can anyone help me here?
>>>>>>
>>>>>> I've got a small Spark cluster running on three machines - hadoop-s1,
>>>>>> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three
>>>>>> acting as workers. It works fine - I can connect with spark-shell, I
>>>>>> can run jobs, and I can see the web UI.
>>>>>>
>>>>>> The web UI says:
>>>>>> Spark Master at spark://hadoop-s1.oculus.local:7077
>>>>>> URL: spark://hadoop-s1.oculus.local:7077
>>>>>>
>>>>>> I've connected to it fine using both a Scala and a Java SparkContext.
>>>>>>
>>>>>> But when I try connecting from within a Tomcat service, I get the
>>>>>> following messages:
>>>>>> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class -
>>>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>>>> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class -
>>>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
>>>>>> masters are unresponsive! Giving up.
>>>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
>>>>>> cluster looks dead, giving up.
>>>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class -
>>>>>> Exiting due to error from cluster scheduler: Spark cluster looks down
>>>>>>
>>>>>> When I look at the Spark server logs, there isn't even a sign of an
>>>>>> attempted connection.
>>>>>>
>>>>>> I'm trying to use a JavaSparkContext; I've printed out the
>>>>>> parameters I pass in, and the same parameters work fine in a
>>>>>> stand-alone program.
>>>>>>
>>>>>> Anyone have a clue why this fails? Or even how to find out why it
>>>>>> fails?
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Nathan Kronenfeld
>>>>>> Senior Visualization Developer
>>>>>> Oculus Info Inc
>>>>>> 2 Berkeley Street, Suite 600,
>>>>>> Toronto, Ontario M5A 4J5
>>>>>> Phone:  +1-416-203-3003 x 238
>>>>>> Email:  nkronenfeld@oculusinfo.com
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Nathan Kronenfeld
>>>> Senior Visualization Developer
>>>> Oculus Info Inc
>>>> 2 Berkeley Street, Suite 600,
>>>> Toronto, Ontario M5A 4J5
>>>> Phone:  +1-416-203-3003 x 238
>>>> Email:  nkronenfeld@oculusinfo.com
>>>>
>>>
>>>
>>
>>
>> --
>> Nathan Kronenfeld
>> Senior Visualization Developer
>> Oculus Info Inc
>> 2 Berkeley Street, Suite 600,
>> Toronto, Ontario M5A 4J5
>> Phone:  +1-416-203-3003 x 238
>> Email:  nkronenfeld@oculusinfo.com
>>
>
>


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenfeld@oculusinfo.com

Re: Trying to connect to spark from within a web server

Posted by Soumya Simanta <so...@gmail.com>.
Most likely the classes/jars required to connect to Spark are not being
loaded, or the wrong versions are being loaded, when you do this from
inside the web container (Tomcat).
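
A quick way to check (hypothetical snippet - drop it into any servlet or
a startup hook) is to print where the relevant classes are actually being
loaded from inside Tomcat:

System.out.println(org.apache.spark.api.java.JavaSparkContext.class
    .getProtectionDomain().getCodeSource().getLocation());
System.out.println(scala.Predef.class
    .getProtectionDomain().getCodeSource().getLocation());

If either prints an unexpected jar (say, an old copy under WEB-INF/lib),
that's the version conflict.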





On Sat, Feb 22, 2014 at 1:51 PM, Nathan Kronenfeld <
nkronenfeld@oculusinfo.com> wrote:

> Yes, but only when I try to connect from a web service running in Tomcat.
>
> When I try to connect using a stand-alone program, using the same
> parameters, it works fine.
>
>
> On Sat, Feb 22, 2014 at 12:15 PM, Mayur Rustagi <ma...@gmail.com> wrote:
>
>> So Spark is running on that IP, the web UI is loading on that IP and
>> showing workers, and when you connect to that IP with the Java API the
>> cluster appears to be down?
>>
>> Mayur Rustagi
>> Ph: +919632149971
>> http://www.sigmoidanalytics.com
>> https://twitter.com/mayur_rustagi
>>
>>
>>
>> On Fri, Feb 21, 2014 at 10:22 PM, Nathan Kronenfeld <
>> nkronenfeld@oculusinfo.com> wrote:
>>
>>> Netstat gives exactly the expected IP address (not a 127...., but a
>>> 192...).
>>> I tried it anyway, though... exactly the same results, but with a number
>>> instead of a name.
>>> Oh, and I forgot to mention last time, in case it makes a difference -
>>> I'm running 0.8.1, not 0.9.0, at least for now.
>>>
>>>
>>>
>>> On Sat, Feb 22, 2014 at 12:50 AM, Mayur Rustagi <mayur.rustagi@gmail.com
>>> > wrote:
>>>
>>>> Most likely the master is binding to one address and you are
>>>> connecting to some other one. The master can bind to an internal
>>>> 127.0.* address, or to whatever the machine's IP happened to be at
>>>> startup. The easiest check is:
>>>> netstat -an | grep 7077
>>>> This will show which address the master is actually bound to, and
>>>> thus exactly which address to use when launching the Spark context.
>>>>
>>>> Mayur Rustagi
>>>> Ph: +919632149971
>>>> http://www.sigmoidanalytics.com
>>>> https://twitter.com/mayur_rustagi
>>>>
>>>>
>>>>
>>>> On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
>>>> nkronenfeld@oculusinfo.com> wrote:
>>>>
>>>>> Can anyone help me here?
>>>>>
>>>>> I've got a small Spark cluster running on three machines - hadoop-s1,
>>>>> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three
>>>>> acting as workers. It works fine - I can connect with spark-shell, I
>>>>> can run jobs, and I can see the web UI.
>>>>>
>>>>> The web UI says:
>>>>> Spark Master at spark://hadoop-s1.oculus.local:7077
>>>>> URL: spark://hadoop-s1.oculus.local:7077
>>>>>
>>>>> I've connected to it fine using both a Scala and a Java SparkContext.
>>>>>
>>>>> But when I try connecting from within a Tomcat service, I get the
>>>>> following messages:
>>>>> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class -
>>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>>> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class -
>>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
>>>>> masters are unresponsive! Giving up.
>>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
>>>>> cluster looks dead, giving up.
>>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class -
>>>>> Exiting due to error from cluster scheduler: Spark cluster looks down
>>>>>
>>>>> When I look at the Spark server logs, there isn't even a sign of an
>>>>> attempted connection.
>>>>>
>>>>> I'm trying to use a JavaSparkContext; I've printed out the
>>>>> parameters I pass in, and the same parameters work fine in a
>>>>> stand-alone program.
>>>>>
>>>>> Anyone have a clue why this fails? Or even how to find out why it
>>>>> fails?
>>>>>
>>>>>
>>>>> --
>>>>> Nathan Kronenfeld
>>>>> Senior Visualization Developer
>>>>> Oculus Info Inc
>>>>> 2 Berkeley Street, Suite 600,
>>>>> Toronto, Ontario M5A 4J5
>>>>> Phone:  +1-416-203-3003 x 238
>>>>> Email:  nkronenfeld@oculusinfo.com
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Nathan Kronenfeld
>>> Senior Visualization Developer
>>> Oculus Info Inc
>>> 2 Berkeley Street, Suite 600,
>>> Toronto, Ontario M5A 4J5
>>> Phone:  +1-416-203-3003 x 238
>>> Email:  nkronenfeld@oculusinfo.com
>>>
>>
>>
>
>
> --
> Nathan Kronenfeld
> Senior Visualization Developer
> Oculus Info Inc
> 2 Berkeley Street, Suite 600,
> Toronto, Ontario M5A 4J5
> Phone:  +1-416-203-3003 x 238
> Email:  nkronenfeld@oculusinfo.com
>

Re: Trying to connect to spark from within a web server

Posted by Nathan Kronenfeld <nk...@oculusinfo.com>.
Yes, but only when I try to connect from a web service running in Tomcat.

When I try to connect using a stand-alone program, using the same
parameters, it works fine.


On Sat, Feb 22, 2014 at 12:15 PM, Mayur Rustagi <ma...@gmail.com> wrote:

> So Spark is running on that IP, the web UI is loading on that IP and
> showing workers, and when you connect to that IP with the Java API the
> cluster appears to be down?
>
> Mayur Rustagi
> Ph: +919632149971
> http://www.sigmoidanalytics.com
> https://twitter.com/mayur_rustagi
>
>
>
> On Fri, Feb 21, 2014 at 10:22 PM, Nathan Kronenfeld <
> nkronenfeld@oculusinfo.com> wrote:
>
>> Netstat gives exactly the expected IP address (not a 127...., but a
>> 192...).
>> I tried it anyway, though... exactly the same results, but with a number
>> instead of a name.
>> Oh, and I forgot to mention last time, in case it makes a difference -
>> I'm running 0.8.1, not 0.9.0, at least for now.
>>
>>
>>
>> On Sat, Feb 22, 2014 at 12:50 AM, Mayur Rustagi <ma...@gmail.com> wrote:
>>
>>> Most likely the master is binding to one address and you are
>>> connecting to some other one. The master can bind to an internal
>>> 127.0.* address, or to whatever the machine's IP happened to be at
>>> startup. The easiest check is:
>>> netstat -an | grep 7077
>>> This will show which address the master is actually bound to, and
>>> thus exactly which address to use when launching the Spark context.
>>>
>>> Mayur Rustagi
>>> Ph: +919632149971
>>> http://www.sigmoidanalytics.com
>>> https://twitter.com/mayur_rustagi
>>>
>>>
>>>
>>> On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
>>> nkronenfeld@oculusinfo.com> wrote:
>>>
>>>> Can anyone help me here?
>>>>
>>>> I've got a small Spark cluster running on three machines - hadoop-s1,
>>>> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three
>>>> acting as workers. It works fine - I can connect with spark-shell, I
>>>> can run jobs, and I can see the web UI.
>>>>
>>>> The web UI says:
>>>> Spark Master at spark://hadoop-s1.oculus.local:7077
>>>> URL: spark://hadoop-s1.oculus.local:7077
>>>>
>>>> I've connected to it fine using both a Scala and a Java SparkContext.
>>>>
>>>> But when I try connecting from within a Tomcat service, I get the
>>>> following messages:
>>>> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class -
>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class -
>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
>>>> masters are unresponsive! Giving up.
>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
>>>> cluster looks dead, giving up.
>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting
>>>> due to error from cluster scheduler: Spark cluster looks down
>>>>
>>>> When I look at the Spark server logs, there isn't even a sign of an
>>>> attempted connection.
>>>>
>>>> I'm trying to use a JavaSparkContext; I've printed out the
>>>> parameters I pass in, and the same parameters work fine in a
>>>> stand-alone program.
>>>>
>>>> Anyone have a clue why this fails? Or even how to find out why it
>>>> fails?
>>>>
>>>>
>>>> --
>>>> Nathan Kronenfeld
>>>> Senior Visualization Developer
>>>> Oculus Info Inc
>>>> 2 Berkeley Street, Suite 600,
>>>> Toronto, Ontario M5A 4J5
>>>> Phone:  +1-416-203-3003 x 238
>>>> Email:  nkronenfeld@oculusinfo.com
>>>>
>>>
>>>
>>
>>
>> --
>> Nathan Kronenfeld
>> Senior Visualization Developer
>> Oculus Info Inc
>> 2 Berkeley Street, Suite 600,
>> Toronto, Ontario M5A 4J5
>> Phone:  +1-416-203-3003 x 238
>> Email:  nkronenfeld@oculusinfo.com
>>
>
>


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenfeld@oculusinfo.com

Re: Trying to connect to spark from within a web server

Posted by Mayur Rustagi <ma...@gmail.com>.
So Spark is running on that IP, the web UI is loading on that IP and
showing workers, and when you connect to that IP with the Java API the
cluster appears to be down?

Mayur Rustagi
Ph: +919632149971
http://www.sigmoidanalytics.com
https://twitter.com/mayur_rustagi



On Fri, Feb 21, 2014 at 10:22 PM, Nathan Kronenfeld <
nkronenfeld@oculusinfo.com> wrote:

> Netstat gives exactly the expected IP address (not a 127...., but a
> 192...).
> I tried it anyway, though... exactly the same results, but with a number
> instead of a name.
> Oh, and I forgot to mention last time, in case it makes a difference - I'm
> running 0.8.1, not 0.9.0, at least for now.
>
>
>
> On Sat, Feb 22, 2014 at 12:50 AM, Mayur Rustagi <ma...@gmail.com> wrote:
>
>> Most likely the master is binding to one address and you are
>> connecting to some other one. The master can bind to an internal
>> 127.0.* address, or to whatever the machine's IP happened to be at
>> startup. The easiest check is:
>> netstat -an | grep 7077
>> This will show which address the master is actually bound to, and
>> thus exactly which address to use when launching the Spark context.
>>
>> Mayur Rustagi
>> Ph: +919632149971
>> http://www.sigmoidanalytics.com
>> https://twitter.com/mayur_rustagi
>>
>>
>>
>> On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
>> nkronenfeld@oculusinfo.com> wrote:
>>
>>> Can anyone help me here?
>>>
>>> I've got a small Spark cluster running on three machines - hadoop-s1,
>>> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three
>>> acting as workers. It works fine - I can connect with spark-shell, I
>>> can run jobs, and I can see the web UI.
>>>
>>> The web UI says:
>>> Spark Master at spark://hadoop-s1.oculus.local:7077
>>> URL: spark://hadoop-s1.oculus.local:7077
>>>
>>> I've connected to it fine using both a Scala and a Java SparkContext.
>>>
>>> But when I try connecting from within a Tomcat service, I get the
>>> following messages:
>>> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class -
>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class -
>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
>>> masters are unresponsive! Giving up.
>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
>>> cluster looks dead, giving up.
>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting
>>> due to error from cluster scheduler: Spark cluster looks down
>>>
>>> When I look at the Spark server logs, there isn't even a sign of an
>>> attempted connection.
>>>
>>> I'm trying to use a JavaSparkContext; I've printed out the parameters
>>> I pass in, and the same parameters work fine in a stand-alone program.
>>>
>>> Anyone have a clue why this fails? Or even how to find out why it fails?
>>>
>>>
>>> --
>>> Nathan Kronenfeld
>>> Senior Visualization Developer
>>> Oculus Info Inc
>>> 2 Berkeley Street, Suite 600,
>>> Toronto, Ontario M5A 4J5
>>> Phone:  +1-416-203-3003 x 238
>>> Email:  nkronenfeld@oculusinfo.com
>>>
>>
>>
>
>
> --
> Nathan Kronenfeld
> Senior Visualization Developer
> Oculus Info Inc
> 2 Berkeley Street, Suite 600,
> Toronto, Ontario M5A 4J5
> Phone:  +1-416-203-3003 x 238
> Email:  nkronenfeld@oculusinfo.com
>

Re: Trying to connect to spark from within a web server

Posted by Nathan Kronenfeld <nk...@oculusinfo.com>.
Netstat gives exactly the expected IP address (not a 127...., but a 192...).
I tried it anyway, though... exactly the same results, but with a number
instead of a name.
Oh, and I forgot to mention last time, in case it makes a difference - I'm
running 0.8.1, not 0.9.0, at least for now.



On Sat, Feb 22, 2014 at 12:50 AM, Mayur Rustagi <ma...@gmail.com> wrote:

> Most likely the master is binding to one address and you are
> connecting to some other one. The master can bind to an internal
> 127.0.* address, or to whatever the machine's IP happened to be at
> startup. The easiest check is:
> netstat -an | grep 7077
> This will show which address the master is actually bound to, and
> thus exactly which address to use when launching the Spark context.
>
> Mayur Rustagi
> Ph: +919632149971
> http://www.sigmoidanalytics.com
> https://twitter.com/mayur_rustagi
>
>
>
> On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
> nkronenfeld@oculusinfo.com> wrote:
>
>> Can anyone help me here?
>>
>> I've got a small Spark cluster running on three machines - hadoop-s1,
>> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three
>> acting as workers. It works fine - I can connect with spark-shell, I
>> can run jobs, and I can see the web UI.
>>
>> The web UI says:
>> Spark Master at spark://hadoop-s1.oculus.local:7077
>> URL: spark://hadoop-s1.oculus.local:7077
>>
>> I've connected to it fine using both a Scala and a Java SparkContext.
>>
>> But when I try connecting from within a Tomcat service, I get the
>> following messages:
>> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class - Connecting
>> to master spark://hadoop-s1.oculus.local:7077...
>> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class - Connecting
>> to master spark://hadoop-s1.oculus.local:7077...
>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
>> masters are unresponsive! Giving up.
>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
>> cluster looks dead, giving up.
>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting
>> due to error from cluster scheduler: Spark cluster looks down
>>
>> When I look at the Spark server logs, there isn't even a sign of an
>> attempted connection.
>>
>> I'm trying to use a JavaSparkContext; I've printed out the parameters
>> I pass in, and the same parameters work fine in a stand-alone program.
>>
>> Anyone have a clue why this fails? Or even how to find out why it fails?
>>
>>
>> --
>> Nathan Kronenfeld
>> Senior Visualization Developer
>> Oculus Info Inc
>> 2 Berkeley Street, Suite 600,
>> Toronto, Ontario M5A 4J5
>> Phone:  +1-416-203-3003 x 238
>> Email:  nkronenfeld@oculusinfo.com
>>
>
>


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenfeld@oculusinfo.com

Re: Trying to connect to spark from within a web server

Posted by Mayur Rustagi <ma...@gmail.com>.
Most likely the master is binding to one address and you are connecting
to some other one. The master can bind to an internal 127.0.* address,
or to whatever the machine's IP happened to be at startup. The easiest
check is:
netstat -an | grep 7077
This will show which address the master is actually bound to, and thus
exactly which address to use when launching the Spark context.
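
For example (illustrative Java - substitute whatever address netstat
actually reports, and your own app name):

import org.apache.spark.api.java.JavaSparkContext;

// The spark:// URL must match the master's bound host:port exactly.
JavaSparkContext sc = new JavaSparkContext(
    "spark://192.168.0.46:7077",  // address from netstat (example)
    "MyApp");                     // application name (placeholder)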

Mayur Rustagi
Ph: +919632149971
http://www.sigmoidanalytics.com
https://twitter.com/mayur_rustagi



On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
nkronenfeld@oculusinfo.com> wrote:

> Can anyone help me here?
>
> I've got a small Spark cluster running on three machines - hadoop-s1,
> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three
> acting as workers. It works fine - I can connect with spark-shell, I
> can run jobs, and I can see the web UI.
>
> The web UI says:
> Spark Master at spark://hadoop-s1.oculus.local:7077
> URL: spark://hadoop-s1.oculus.local:7077
>
> I've connected to it fine using both a Scala and a Java SparkContext.
>
> But when I try connecting from within a Tomcat service, I get the
> following messages:
> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class - Connecting
> to master spark://hadoop-s1.oculus.local:7077...
> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class - Connecting
> to master spark://hadoop-s1.oculus.local:7077...
> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
> masters are unresponsive! Giving up.
> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
> cluster looks dead, giving up.
> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting
> due to error from cluster scheduler: Spark cluster looks down
>
> When I look at the Spark server logs, there isn't even a sign of an
> attempted connection.
>
> I'm trying to use a JavaSparkContext; I've printed out the parameters
> I pass in, and the same parameters work fine in a stand-alone program.
>
> Anyone have a clue why this fails? Or even how to find out why it fails?
>
>
> --
> Nathan Kronenfeld
> Senior Visualization Developer
> Oculus Info Inc
> 2 Berkeley Street, Suite 600,
> Toronto, Ontario M5A 4J5
> Phone:  +1-416-203-3003 x 238
> Email:  nkronenfeld@oculusinfo.com
>