Posted to user@spark.apache.org by Archit Thakur <ar...@gmail.com> on 2014/01/02 12:31:49 UTC

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Hi,

I have about 5 GB of data, distributed across some 597 sequence files. My
application does a flatMap on the union of the RDDs created from the
individual files. The flatMap statement throws a java.lang.StackOverflowError
with the default stack size, so I increased the stack size to 1 GB (both the
system limit and the JVM's). Now it has started printing "Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient memory" and is not moving forward; it just
keeps printing that message in a continuous loop. Any ideas or suggestions
would help.

-Thx,
Archit
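
For context, here is a minimal Scala sketch of the kind of job described
above, written against the 0.8-era API; the paths, key/value types, object
name, and master URL are all hypothetical:

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._  // implicit Writable converters for sequenceFile

    object UnionFlatMapSketch {
      def main(args: Array[String]) {
        val sc = new SparkContext("spark://master:7077", "UnionFlatMapSketch")

        // One RDD per input file -- roughly 597 of them in the case above.
        val paths = (0 until 597).map(i => "hdfs:///data/seq/part-%05d".format(i))
        val perFile = paths.map(p => sc.sequenceFile[String, String](p))

        // Folding with rdd1.union(rdd2).union(rdd3)... nests one UnionRDD per
        // file, and walking that deep a lineage is one known way to overflow
        // the stack. SparkContext.union builds a single flat UnionRDD instead:
        val all = sc.union(perFile)

        val tokens = all.flatMap { case (_, value) => value.split("\\s+") }
        println(tokens.count())

        sc.stop()
      }
    }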

Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Posted by Archit Thakur <ar...@gmail.com>.
Yes, it has already been set to 50g.
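
In standalone mode this warning typically loops while the master cannot
place executors, for example when spark.executor.memory exceeds what any
single worker advertises. A sketch of the worker-side settings that bound
what is advertised, using the 0.8-era variable names (the values are
placeholders):

    # conf/spark-env.sh on each worker node (standalone deploy mode)
    # A worker offers this much memory to executors; an application that
    # asks for spark.executor.memory=50g will never be scheduled if every
    # worker advertises less than 50g.
    export SPARK_WORKER_MEMORY=64g
    export SPARK_WORKER_CORES=8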


On Thu, Jan 2, 2014 at 7:05 PM, Eugen Cepoi <ce...@gmail.com> wrote:

> Did you try setting the spark.executor.memory property to the amount of
> memory you want per worker?
>
> For example spark.executor.memory=2g
>
> http://spark.incubator.apache.org/docs/latest/configuration.html
>
>
> 2014/1/2 Archit Thakur <ar...@gmail.com>
>
>> Needless to say, the workers can be seen on the UI.
>>
>>
>> On Thu, Jan 2, 2014 at 5:01 PM, Archit Thakur <ar...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have about 5 GB of data, distributed across some 597 sequence files. My
>>> application does a flatMap on the union of the RDDs created from the
>>> individual files. The flatMap statement throws a java.lang.StackOverflowError
>>> with the default stack size, so I increased the stack size to 1 GB (both the
>>> system limit and the JVM's). Now it has started printing "Initial job has not
>>> accepted any resources; check your cluster UI to ensure that workers are
>>> registered and have sufficient memory" and is not moving forward; it just
>>> keeps printing that message in a continuous loop. Any ideas or suggestions
>>> would help.
>>>
>>> -Thx,
>>> Archit
>>>
>>
>>
>

Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Posted by Eugen Cepoi <ce...@gmail.com>.
Did you try setting the spark.executor.memory property to the amount of
memory you want per worker?

For example spark.executor.memory=2g

http://spark.incubator.apache.org/docs/latest/configuration.html
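
In the 0.8-era docs at that link, such properties are Java system properties
that must be set before the SparkContext is constructed; a minimal sketch
(the master URL, object name, and app name are placeholders):

    import org.apache.spark.SparkContext

    object ConfSketch {
      def main(args: Array[String]) {
        // Must be set before the SparkContext is created to take effect.
        System.setProperty("spark.executor.memory", "2g")

        val sc = new SparkContext("spark://master:7077", "ConfSketch")
        // ... job code ...
        sc.stop()
      }
    }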


2014/1/2 Archit Thakur <ar...@gmail.com>

> Needless to say, the workers can be seen on the UI.
>
>
> On Thu, Jan 2, 2014 at 5:01 PM, Archit Thakur <ar...@gmail.com> wrote:
>
>> Hi,
>>
>> I have about 5 GB of data, distributed across some 597 sequence files. My
>> application does a flatMap on the union of the RDDs created from the
>> individual files. The flatMap statement throws a java.lang.StackOverflowError
>> with the default stack size, so I increased the stack size to 1 GB (both the
>> system limit and the JVM's). Now it has started printing "Initial job has not
>> accepted any resources; check your cluster UI to ensure that workers are
>> registered and have sufficient memory" and is not moving forward; it just
>> keeps printing that message in a continuous loop. Any ideas or suggestions
>> would help.
>>
>> -Thx,
>> Archit
>>
>
>

Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Posted by Archit Thakur <ar...@gmail.com>.
Needless to say, the workers can be seen on the UI.


On Thu, Jan 2, 2014 at 5:01 PM, Archit Thakur <ar...@gmail.com> wrote:

> Hi,
>
> I have about 5 GB of data, distributed across some 597 sequence files. My
> application does a flatMap on the union of the RDDs created from the
> individual files. The flatMap statement throws a java.lang.StackOverflowError
> with the default stack size, so I increased the stack size to 1 GB (both the
> system limit and the JVM's). Now it has started printing "Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient memory" and is not moving forward; it just
> keeps printing that message in a continuous loop. Any ideas or suggestions
> would help.
>
> -Thx,
> Archit
>