Posted to user@cassandra.apache.org by Yatong Zhang <bl...@gmail.com> on 2014/09/17 16:43:08 UTC

java.lang.OutOfMemoryError: unable to create new native thread

Hi there,

I am using the leveled compaction strategy and have many sstable files. The
error occurred during startup; any idea what's causing this?


> ERROR [FlushWriter:4] 2014-09-17 22:36:59,383 CassandraDaemon.java (line 199) Exception in thread Thread[FlushWriter:4,5,main]
> java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:693)
>         at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
>         at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1017)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:724)
> ERROR [FlushWriter:2] 2014-09-17 22:36:59,472 CassandraDaemon.java (line 199) Exception in thread Thread[FlushWriter:2,5,main]
> FSReadError in /data5/cass/system/compactions_in_progress/system-compactions_in_progress-jb-23-Index.db
>         at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:200)
>         at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:168)
>         at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:334)
>         at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:324)
>         at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:394)
>         at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:342)
>         at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.IOException: Map failed
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
>         at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:192)
>         ... 10 more
> Caused by: java.lang.OutOfMemoryError: Map failed
>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
>         ... 11 more
>

Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by Chris Lohfink <cl...@blackbirdit.com>.
Check that the limits described here are set correctly:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html

particularly:
 * the mmap limit, which is really what this looks like...
 * the nproc limit (maximum number of threads), which on some distros defaults to 1024 and can cause this.. I don't think this is it, but maybe.
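
For reference, a rough sketch of those settings (the values follow the DataStax recommendations linked above; the limits file path and the user Cassandra runs as depend on your install):

# /etc/security/limits.d/cassandra.conf (assuming Cassandra runs as "cassandra")
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
cassandra  -  nproc    32768
cassandra  -  as       unlimited

# /etc/sysctl.conf (raises the per-process memory map limit)
vm.max_map_count = 131072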

If you think you have it all set correctly but are still hitting a limit, verify that the process is actually picking the settings up:

cat /proc/`cat /var/run/cassandra/cassandra.pid`/limits

or

cat /proc/whateverCassandraPIDIs/limits

should look something like:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             unlimited            unlimited            processes 
Max open files            100000               100000               files     
Max locked memory         unlimited            unlimited            bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       16382                16382                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         20                   20                   
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        

---
Chris Lohfink


Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by Yatong Zhang <bl...@gmail.com>.
My sstable size is 192MB. I removed some data directories to reduce the
amount of data that needed to be loaded, and this time it worked, so I am
sure the problem was that there was too much data.
I tried tuning the JVM parameters, like the heap size and stack size, but
that didn't help. I finally got it resolved by adding some options to
'/etc/sysctl.conf':

# Controls the maximum PID value
kernel.pid_max = 9999999
# Controls the maximum number of threads
kernel.threads-max = 9999999
# Controls the maximum number of virtual memory areas a process may have
vm.max_map_count = 9999999

Hope this is helpful to others; any other advice is also welcome.
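
To load the new values without a reboot, something like this should work (assuming root privileges):

sudo sysctl -p
# or set a single value directly, e.g.:
sudo sysctl -w vm.max_map_count=9999999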


Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by Rahul Neelakantan <ra...@rahul.be>.
What sstable size are you using with LCS? Are you at the default of 5 MB?
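
For reference, the LCS sstable size is a per-table compaction option. A minimal sketch of changing it (keyspace and table names are placeholders; cqlsh reads statements from stdin):

cqlsh <<'EOF'
ALTER TABLE my_keyspace.my_table
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 160 };
EOF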

Rahul Neelakantan


Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by Yatong Zhang <bl...@gmail.com>.
sorry, about 300k+


Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by Yatong Zhang <bl...@gmail.com>.
No, I am running a 64-bit JVM. But I have many sstable files, about 30k+


Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by graham sanderson <gr...@vast.com>.
Are you running on a 32-bit JVM?
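
A quick way to check, assuming a HotSpot JVM (the version banner names the VM bitness):

java -version 2>&1 | grep -i bit
# a 64-bit HotSpot prints something like: "Java HotSpot(TM) 64-Bit Server VM ..."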


Re: java.lang.OutOfMemoryError: unable to create new native thread

Posted by "J. Ryan Earl" <os...@jryanearl.us>.
What's the 'ulimit -a' output of the user Cassandra runs as? From this and
your previous OOM thread, it sounds like you skipped the requisite OS
configuration.
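
One way to check it for the right user (assuming Cassandra runs as "cassandra"; adjust if yours differs):

sudo su - cassandra -s /bin/bash -c 'ulimit -a'
# or inspect the running process directly:
cat /proc/$(pgrep -f CassandraDaemon | head -1)/limits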
