Posted to user@storm.apache.org by Ganesh Chandrasekaran <gc...@wayfair.com> on 2015/05/19 20:09:33 UTC
Java memory exception using multilang
Hi all,
I am seeing the following error on one of the supervisor nodes, which looks similar to this question on Stack Overflow - http://stackoverflow.com/questions/23008467/java-lang-outofmemoryerror-unable-to-create-new-native-thread-while-running-s
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:691)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
        at java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1555)
        at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:333)
        at java.util.concurrent.ScheduledThreadPoolExecutor.scheduleAtFixedRate(ScheduledThreadPoolExecutor.java:570)
        at java.util.concurrent.Executors$DelegatedScheduledExecutorService.scheduleAtFixedRate(Executors.java:695)
        at backtype.storm.task.ShellBolt.prepare(ShellBolt.java:128)
        at backtype.storm.daemon.executor$fn__4722$fn__4734.invoke(executor.clj:692)
        at backtype.storm.util$async_loop$fn__458.invoke(util.clj:461)
        at clojure.lang.AFn.run(AFn.java:24)
        at java.lang.Thread.run(Thread.java:722)
I checked the memory on my supervisor and it still has 40% free, so I am not sure it is the same problem as the one discussed in the link above. Also, I am only seeing this on 1 supervisor and not on my other supervisor nodes. It is the same config we had on our old cluster, which was running 0.8.2. I also have multiple topologies running on the cluster, but I doubt any application is causing a memory leak. If one were, shouldn't all memory be used up rather than 40% listed as free?
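One quick way to tell whether this is a thread-limit problem rather than a heap problem is to compare the number of threads owned by the Storm user against the per-user process limit. A minimal diagnostic sketch, assuming a Linux supervisor and that the Storm daemons run as a user named "storm" (adjust the username to your setup):

```shell
# Count the threads (lightweight processes) owned by the Storm user.
# On Linux, threads count against the "max user processes" ulimit.
ps -u storm -o nlwp= | awk '{sum += $1} END {print "threads:", sum}'

# The per-user limits the JVMs are running under:
ulimit -u   # max user processes (includes threads)
ulimit -n   # max open files

# System-wide ceilings, for comparison:
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max
```

If the thread total is close to `ulimit -u`, the JVM will fail to create native threads even with plenty of free memory.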
Cluster summary -
1 Nimbus - 5 Supervisors (12 ports each)

drpc.childopts: -Xmx768m
nimbus.childopts: -Xmx1024m
supervisor.childopts: -Xmx256m
worker.childopts: -Xmx1024m
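Worth noting about these settings: -Xmx only caps the Java heap, while every native thread additionally reserves its own stack outside the heap. That is why this error can appear with 40% of memory still free. A rough back-of-the-envelope sketch (the thread count here is hypothetical; 1 MB is the usual HotSpot default stack size on 64-bit Linux):

```shell
# Each native thread reserves roughly one stack's worth of virtual memory
# outside the -Xmx heap, so many threads can exhaust limits while the
# heap (and free RAM) look fine.
threads=4000      # hypothetical total threads across all workers on one node
stack_kb=1024     # typical default HotSpot thread stack size on 64-bit Linux
echo "$((threads * stack_kb / 1024)) MB reserved for thread stacks"
# -> 4000 MB reserved for thread stacks
```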
Thanks,
Ganesh
Re: Java memory exception using multilang
Posted by Jeffery Maass <ma...@gmail.com>.
This is not a Storm issue.
https://plumbr.eu/outofmemoryerror/unable-to-create-new-native-thread
Thank you for your time!
+++++++++++++++++++++
Jeff Maass <ma...@gmail.com>
linkedin.com/in/jeffmaass
stackoverflow.com/users/373418/maassql
+++++++++++++++++++++
On Tue, May 19, 2015 at 1:51 PM, saiprasad mishra <saiprasadmishra@gmail.com> wrote:

> It has nothing to do with the heap, as the link suggested.
> Do a ulimit -a and see what the value is for "max user processes".
> Try to increase this if you are on Linux. Also, do not forget to increase
> the open files limit.
> Maybe set both of them to 65535; that should take care of this issue.
>
> Regards,
> Sai
Re: Java memory exception using multilang
Posted by saiprasad mishra <sa...@gmail.com>.
It has nothing to do with the heap, as the link suggested.
Do a ulimit -a and see what the value is for "max user processes".
Try to increase this if you are on Linux. Also, do not forget to increase the open files limit.
Maybe set both of them to 65535; that should take care of this issue.
Regards,
Sai
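For anyone applying this fix: raising the limits with ulimit only affects the current shell, so it will not survive a reboot or apply to daemons started by init. A sketch of both the one-off and the persistent change on Linux, assuming the Storm daemons run as a user named "storm" (adjust the username and values to your environment):

```shell
# One-off, current shell only (soft limits):
ulimit -u 65535   # max user processes (threads count against this)
ulimit -n 65535   # max open files

# Persistent: add lines like these to /etc/security/limits.conf
# (pam_limits syntax), then log in again and restart the Storm daemons:
#   storm  soft  nproc   65535
#   storm  hard  nproc   65535
#   storm  soft  nofile  65535
#   storm  hard  nofile  65535

# Verify from the account the supervisor actually runs as:
ulimit -u
ulimit -n
```

Remember to restart the supervisor after changing the limits; a running JVM keeps the limits it inherited at startup.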