Posted to user@cassandra.apache.org by Ken Sandney <bl...@gmail.com> on 2010/04/20 03:22:16 UTC

0.6.1 insert 1B rows, crashed when using py_stress

Hi
I am doing an insert test with 9 nodes, the command:

> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
> 10.0.0.1,10.0.0.2.....

and 5 of the 9 nodes crashed; only about 6,500,000 rows were inserted.
I checked the system.log and it seems the reason is 'out of memory'. I
don't know if this has something to do with my settings.
Any idea about this?
Thank you, and the following are the errors from system.log


ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013 CassandraDaemon.java (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)
ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932 CassandraDaemon.java (line 78) Fatal exception in thread Thread[ROW-MUTATION-STAGE:9,5,main]
java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
        at java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
        at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
        at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
        at org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
        at org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
        at org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
        at org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
        at org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
        at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)


and another

 INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving 1094238944 used; max is 1211826176
ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-14,5,main]
java.lang.OutOfMemoryError: Java heap space
        at sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
        at sun.nio.ch.IOUtil.read(IOUtil.java:212)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
        at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
        at java.io.InputStream.read(InputStream.java:85)
        at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
        at java.io.DataInputStream.readInt(DataInputStream.java:370)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
        at org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.OutOfMemoryError: Java heap space
 INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java (line 162) Completed flushing /m/cassandra/data/Keyspace1/Standard1-623-Data.db
ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-13,5,main]
java.lang.OutOfMemoryError: Java heap space
ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-15,5,main]
java.lang.OutOfMemoryError: Java heap space
ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
java.lang.OutOfMemoryError: Java heap space


and

 INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216 reclaimed leaving 1094527752 used; max is 1211826176
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid28670.hprof ...
 INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008 reclaimed leaving 1093996648 used; max is 1211826176
Heap dump file created [1095841554 bytes in 12.960 secs]
 INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968 reclaimed leaving 1093960776 used; max is 1211826176
ERROR 21:01:49,559 Fatal exception in thread Thread[Hint delivery,5,main]
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:2772)
        at java.util.Arrays.copyOf(Arrays.java:2746)
        at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
        at java.util.ArrayList.add(ArrayList.java:378)
        at java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
        at java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
        at org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
        at org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
        at org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
        at org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
        at org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
        at org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
        at java.lang.Thread.run(Thread.java:636)
 INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240 reclaimed leaving 1093862712 used; max is 1211826176


and

ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Hint delivery,5,main]
java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
        at java.lang.Thread.run(Thread.java:636)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
        at org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
        at org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
        ... 1 more
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
        at java.util.concurrent.FutureTask.get(FutureTask.java:111)
        at org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
        ... 4 more
Caused by: java.lang.OutOfMemoryError: Java heap space
ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
        at java.util.concurrent.FutureTask.get(FutureTask.java:111)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
Caused by: java.lang.OutOfMemoryError: Java heap space
ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
        at java.util.concurrent.FutureTask.get(FutureTask.java:111)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
        at org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
Caused by: java.lang.OutOfMemoryError: Java heap space
ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572 CassandraDaemon.java (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
java.lang.OutOfMemoryError: Java heap space
        at java.util.HashMap.<init>(HashMap.java:226)
        at org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
        at java.util.TimerThread.mainLoop(Timer.java:534)
        at java.util.TimerThread.run(Timer.java:484)
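A quick back-of-the-envelope check on the GCInspector lines above (a sketch: the used/max/reclaimed numbers are copied from the log, everything else is illustrative): even right after a full ConcurrentMarkSweep collection the heap sits at roughly 90% of its ~1.2GB ceiling and the collector reclaims almost nothing, which is consistent with the flood of OutOfMemoryErrors.

```python
# Values from the GCInspector line:
#   "GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed
#    leaving 1094238944 used; max is 1211826176"
used_bytes = 1094238944    # live heap right after a full CMS collection
max_bytes = 1211826176     # the -Xmx ceiling (about 1.2 GB)
reclaimed_bytes = 1239096  # how little the full GC actually freed

print(f"occupancy after full GC: {used_bytes / max_bytes:.1%}")    # 90.3%
print(f"reclaimed by full GC: {reclaimed_bytes / max_bytes:.2%}")  # 0.10%
```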

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Jonathan Ellis <jb...@gmail.com>.
http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
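The linked FAQ entry concerns memtable and flush behavior under sustained inserts. As a rough illustration of why a 1GB heap struggles here, this sketch encodes a period rule of thumb for minimum heap size; the x3 multiplier, the 1GB base overhead, and the 64MB memtable threshold are assumptions drawn from contemporaneous Cassandra wiki advice, not values reported in this thread.

```python
# Rough minimum-heap rule of thumb (illustrative assumptions, see above):
# each actively written column family can pin about 3x its memtable
# threshold in heap (active memtable + flushing memtable + object
# overhead), on top of roughly 1GB of base overhead.
def min_heap_mb(memtable_throughput_mb: int, hot_cfs: int,
                overhead_mb: int = 1024) -> int:
    return memtable_throughput_mb * 3 * hot_cfs + overhead_mb

print(min_heap_mb(64, 1))  # a 64MB threshold and one hot CF -> 1216 (MB)
```

So even a single column family at a stock-ish 64MB threshold already wants more than the 1GB heap these nodes were given.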

On Mon, Apr 19, 2010 at 8:22 PM, Ken Sandney <bl...@gmail.com> wrote:
> [original message and logs quoted in full; trimmed]

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Benjamin Black <b...@b3k.us>.
Not so reasonable, given what you are trying to accomplish. A 1GB
heap (on a 2GB machine) is fine for development and functional
testing, but I wouldn't try to deal with the number of rows you are
describing with less than 8GB RAM per node and a 4-6GB heap.


b
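Concretely, this recommendation means raising the -Xms/-Xmx flags in the JVM_OPTS block of cassandra.in.sh (the stock block is quoted later in this thread). A minimal sketch of that substitution, run against an illustrative JVM_OPTS string rather than the real file:

```python
import re

# Illustrative stand-in for the JVM_OPTS line in cassandra.in.sh.
jvm_opts = 'JVM_OPTS=" -ea -Xms128M -Xmx1G -XX:+UseConcMarkSweepGC"'

# Set initial and max heap to the same 6G value; matching -Xms to -Xmx
# avoids heap-resize pauses under load.
tuned = re.sub(r"-Xms\w+", "-Xms6G", jvm_opts)
tuned = re.sub(r"-Xmx\w+", "-Xmx6G", tuned)
print(tuned)  # JVM_OPTS=" -ea -Xms6G -Xmx6G -XX:+UseConcMarkSweepGC"
```

Note that a 6GB heap only makes sense on the 8GB nodes described here; on the 2GB boxes in this thread it would starve the OS and page cache.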

On Mon, Apr 19, 2010 at 7:32 PM, Ken Sandney <bl...@gmail.com> wrote:
> I am just running Cassandra on normal boxes, and granting 1GB of the total
> 2GB to Cassandra seems reasonable to me. Can this problem be resolved by
> tuning the thresholds described on this page, or just by waiting for the
> 0.7 release as Brandon mentioned?
>
> On Tue, Apr 20, 2010 at 10:15 AM, Jonathan Ellis <jb...@gmail.com> wrote:
>>
>> Schubert, I don't know if you saw this in the other thread referencing
>> your slides:
>>
>> It looks like the slowdown doesn't hit until after several GCs,
>> although it's hard to tell since the scale is different on the GC
>> graph and the insert throughput ones.
>>
>> Perhaps this is compaction kicking in, not GCs?  Definitely the extra
>> I/O + CPU load from compaction will cause a drop in throughput.
>>
>> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
>> > -Xmx1G is too small.
>> > In my cluster there is 8GB of RAM on each node, and I grant 6GB to
>> > Cassandra.
>> >
>> > Please see my test @
>> > http://www.slideshare.net/schubertzhang/presentations
>> >
>> > –Memory and GC are always the bottleneck and a big issue for java-based
>> > infrastructure software!
>> >
>> > References:
>> > –http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
>> > –https://issues.apache.org/jira/browse/CASSANDRA-896
>> > (LinkedBlockingQueue issue, fixed in jdk-6u19)
>> >
>> > In fact, whenever I use Java-based infrastructure software such as
>> > Cassandra, Hadoop, or HBase, I eventually run into the same memory/GC
>> > pain.
>> >
>> > So we provide higher-end hardware with more RAM (such as 32GB~64GB) and
>> > more CPU cores (such as 8~16), and we still cannot prevent the
>> > Out-Of-Memory-Error.
>> >
>> > I am thinking that maybe it is not right to leave the job of memory
>> > control to the JVM.
>> >
>> > I have long experience with telecom and embedded software over the past
>> > ten years, where robust programs and small RAM footprints are required.
>> > I want to discuss the following ideas with the community:
>> >
>> > 1. Manage memory ourselves: allocate objects/resources (memory) at the
>> > initialization phase, and assign instances at runtime.
>> > 2. Reject requests when short of resources, instead of throwing an OOME
>> > and exiting (crashing).
>> > 3. I know it is not easy in a Java program.
>> >
>> > Schubert
>> >
>> > On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com>
>> > wrote:
>> >>
>> >> here are my JVM options (the defaults; I didn't modify them), from
>> >> cassandra.in.sh:
>> >>>
>> >>> # Arguments to pass to the JVM
>> >>> JVM_OPTS=" \
>> >>>         -ea \
>> >>>         -Xms128M \
>> >>>         -Xmx1G \
>> >>>         -XX:TargetSurvivorRatio=90 \
>> >>>         -XX:+AggressiveOpts \
>> >>>         -XX:+UseParNewGC \
>> >>>         -XX:+UseConcMarkSweepGC \
>> >>>         -XX:+CMSParallelRemarkEnabled \
>> >>>         -XX:+HeapDumpOnOutOfMemoryError \
>> >>>         -XX:SurvivorRatio=128 \
>> >>>         -XX:MaxTenuringThreshold=0 \
>> >>>         -Dcom.sun.management.jmxremote.port=8080 \
>> >>>         -Dcom.sun.management.jmxremote.ssl=false \
>> >>>         -Dcom.sun.management.jmxremote.authenticate=false"
>> >>
>> >> and my box is a normal PC with 2GB RAM and an Intel E3200 @ 2.40GHz.
>> >> By the way, I am using the latest Sun JDK.
>> >> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com>
>> >> wrote:
>> >>>
>> >>> Seems you should configure a larger JVM heap.
>> >>>
>> >>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com>
>> >>> wrote:
>> >>>>
>> >>>> Please also post your JVM heap and GC options, i.e. the settings in
>> >>>> cassandra.in.sh.
>> >>>> And what about your node hardware?
>> >>>>
>> >>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> Hi
>> >>>>> I am doing a insert test with 9 nodes, the command:
>> >>>>>>
>> >>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
>> >>>>>> 10.0.0.1,10.0.0.2.....
>> >>>>>
>> >>>>> and  5 of the 9 nodes were cashed, only about 6'500'000 rows were
>> >>>>> inserted
>> >>>>> I checked out the system.log and seems the reason are 'out of
>> >>>>> memory'.
>> >>>>> I don't if this had something to do with my settings.
>> >>>>> Any idea about this?
>> >>>>> Thank you, and the following are the errors from system.log
>> >>>>>
>> >>>>>>
>> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013
>> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>> >>>>>>
>> >>>>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
>> >>>>>>
>> >>>>>>         at java.util.TimerThread.run(Timer.java:462)
>> >>>>>>
>> >>>>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932
>> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>> >>>>>> Thread[ROW-MUTATION-STAGE:9,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:619)
>> >>>>>
>> >>>>> and another
>> >>>>>>
>> >>>>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java
>> >>>>>> (line
>> >>>>>> 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving
>> >>>>>> 1094238944 used; max is 1211826176
>> >>>>>>
>> >>>>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java
>> >>>>>> (line
>> >>>>>> 78) Fatal exception in thread Thread[Thread-14,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
>> >>>>>>
>> >>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
>> >>>>>>
>> >>>>>>         at java.io.InputStream.read(InputStream.java:85)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> java.io.DataInputStream.readInt(DataInputStream.java:370)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
>> >>>>>>
>> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>> >>>>>>         at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>> >>>>>>         at org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>> >>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>> >>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>>>>>         at java.lang.Thread.run(Thread.java:619)
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java (line 162) Completed flushing /m/cassandra/data/Keyspace1/Standard1-623-Data.db
>> >>>>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-13,5,main]
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-15,5,main]
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>
>> >>>>>
>> >>>>> and
>> >>>>>>
>> >>>>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216 reclaimed leaving 1094527752 used; max is 1211826176
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>> Dumping heap to java_pid28670.hprof ...
>> >>>>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008 reclaimed leaving 1093996648 used; max is 1211826176
>> >>>>>> Heap dump file created [1095841554 bytes in 12.960 secs]
>> >>>>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968 reclaimed leaving 1093960776 used; max is 1211826176
>> >>>>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint delivery,5,main]
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
>> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
>> >>>>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
>> >>>>>>         at java.util.ArrayList.add(ArrayList.java:378)
>> >>>>>>         at java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
>> >>>>>>         at java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
>> >>>>>>         at org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
>> >>>>>>         at org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
>> >>>>>>         at org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
>> >>>>>>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
>> >>>>>>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
>> >>>>>>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>> >>>>>>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240 reclaimed leaving 1093862712 used; max is 1211826176
>> >>>>>
>> >>>>> and
>> >>>>>>
>> >>>>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Hint delivery,5,main]
>> >>>>>> java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>> >>>>>>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>> >>>>>>         ... 1 more
>> >>>>>> Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> >>>>>>         at org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
>> >>>>>>         ... 4 more
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> >>>>>>         at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>> >>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>> >>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> >>>>>>         at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>> >>>>>>         at org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>> >>>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>> >>>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572 CassandraDaemon.java (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>         at java.util.HashMap.<init>(HashMap.java:226)
>> >>>>>>         at org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>> >>>>>>         at java.util.TimerThread.mainLoop(Timer.java:534)
>> >>>>>>         at java.util.TimerThread.run(Timer.java:484)
>> >>>>>
>> >>>>>
>> >>>
>> >>
>> >
>> >
>
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Schubert Zhang <zs...@gmail.com>.
Jonathan, thanks.
Yes, the scale of the GC graph is different from the throughput one.
I will do more checking and tuning in our next test immediately.


On Tue, Apr 20, 2010 at 10:39 AM, Ken Sandney <bl...@gmail.com> wrote:

> Sorry I just don't know how to resolve this :)
>
>
> On Tue, Apr 20, 2010 at 10:37 AM, Jonathan Ellis <jb...@gmail.com>wrote:
>
>> Ken, I linked you to the FAQ answering your problem in the first reply
>> you got.  Please don't hijack my replies to other people; that's rude.
>>
>> On Mon, Apr 19, 2010 at 9:32 PM, Ken Sandney <bl...@gmail.com> wrote:
>> > I am just running Cassandra on normal boxes, and granting 1GB of the
>> > total 2GB to Cassandra seems reasonable to me. Can this problem be
>> > resolved by tuning the thresholds described on this page, or only by
>> > waiting for the 0.7 release as Brandon mentioned?
>> >
>> > On Tue, Apr 20, 2010 at 10:15 AM, Jonathan Ellis <jb...@gmail.com>
>> wrote:
>> >>
>> >> Schubert, I don't know if you saw this in the other thread referencing
>> >> your slides:
>> >>
>> >> It looks like the slowdown doesn't hit until after several GCs,
>> >> although it's hard to tell since the scale is different on the GC
>> >> graph and the insert throughput ones.
>> >>
>> >> Perhaps this is compaction kicking in, not GCs?  Definitely the extra
>> >> I/O + CPU load from compaction will cause a drop in throughput.
>> >>
>> >> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com>
>> wrote:
>> >> > -Xmx1G is too small.
>> >> > In my cluster, there is 8GB of RAM on each node, and I grant 6GB to
>> >> > Cassandra.
>> >> >
>> >> > Please see my test @
>> >> > http://www.slideshare.net/schubertzhang/presentations
>> >> >
>> >> > –Memory and GC are always the bottleneck and the big issue of
>> >> > Java-based infrastructure software!
>> >> >
>> >> > References:
>> >> > –http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
>> >> > –https://issues.apache.org/jira/browse/CASSANDRA-896
>> >> > (LinkedBlockingQueue
>> >> > issue, fixed in jdk-6u19)
>> >> >
>> >> > In fact, whenever I use Java-based infrastructure software such as
>> >> > Cassandra, Hadoop, HBase, etc., I am eventually pained by such
>> >> > memory/GC issues.
>> >> >
>> >> > So we provide better hardware with more RAM (such as 32GB~64GB) and
>> >> > more CPU cores (such as 8~16), and still we cannot prevent the
>> >> > Out-Of-Memory-Error.
>> >> >
>> >> > I am thinking that maybe it is not right to leave the job of memory
>> >> > control to the JVM.
>> >> >
>> >> > I have long experience in telecom and embedded software from the past
>> >> > ten years, where robust programs and small RAM footprints are
>> >> > required. I want to discuss the following ideas with the community:
>> >> >
>> >> > 1. Manage the memory ourselves: allocate objects/resources (memory)
>> >> > at the initialization phase, and assign instances at runtime.
>> >> > 2. Reject requests when short of resources, instead of throwing an
>> >> > OOME and exiting (crashing).
>> >> > 3. I know it is not easy in a Java program.
>> >> >
>> >> > Schubert
>> >> >
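Schubert's idea 2 above — rejecting work when resources run short instead of dying with an OutOfMemoryError — can be sketched in Python as a simple bounded queue with backpressure. This is an illustrative sketch only; `BoundedMutationQueue` and `Backpressure` are hypothetical names, not Cassandra code:

```python
import queue

class Backpressure(Exception):
    """Raised instead of letting the process run out of memory."""

class BoundedMutationQueue:
    """Accepts work only while capacity remains; rejects otherwise.

    Capacity is fixed up front (idea 1: allocate at initialization),
    and a full queue causes an immediate rejection (idea 2: refuse
    new work instead of crashing with an OutOfMemoryError).
    """

    def __init__(self, capacity):
        self._q = queue.Queue(maxsize=capacity)  # bounded at init time

    def submit(self, mutation):
        try:
            self._q.put_nowait(mutation)  # never blocks, never grows
        except queue.Full:
            raise Backpressure("server overloaded, retry later")

    def take(self):
        return self._q.get()

q = BoundedMutationQueue(capacity=2)
q.submit("row1")
q.submit("row2")
try:
    q.submit("row3")           # third insert exceeds capacity
except Backpressure as e:
    print("rejected:", e)      # client backs off and retries
```

The client-visible failure is an explicit "overloaded" error it can retry, rather than a node falling over mid-test.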
>> >> > On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> Here are my JVM options; they are the defaults from cassandra.in.sh,
>> >> >> I didn't modify them:
>> >> >>>
>> >> >>> # Arguments to pass to the JVM
>> >> >>>
>> >> >>> JVM_OPTS=" \
>> >> >>>
>> >> >>>         -ea \
>> >> >>>
>> >> >>>         -Xms128M \
>> >> >>>
>> >> >>>         -Xmx1G \
>> >> >>>
>> >> >>>         -XX:TargetSurvivorRatio=90 \
>> >> >>>
>> >> >>>         -XX:+AggressiveOpts \
>> >> >>>
>> >> >>>         -XX:+UseParNewGC \
>> >> >>>
>> >> >>>         -XX:+UseConcMarkSweepGC \
>> >> >>>
>> >> >>>         -XX:+CMSParallelRemarkEnabled \
>> >> >>>
>> >> >>>         -XX:+HeapDumpOnOutOfMemoryError \
>> >> >>>
>> >> >>>         -XX:SurvivorRatio=128 \
>> >> >>>
>> >> >>>         -XX:MaxTenuringThreshold=0 \
>> >> >>>
>> >> >>>         -Dcom.sun.management.jmxremote.port=8080 \
>> >> >>>
>> >> >>>         -Dcom.sun.management.jmxremote.ssl=false \
>> >> >>>
>> >> >>>         -Dcom.sun.management.jmxremote.authenticate=false"
>> >> >>
>> >> >> My box is a normal PC with 2GB RAM and an Intel E3200 @ 2.40GHz. By
>> >> >> the way, I am using the latest Sun JDK.
>> >> >> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> It seems you should configure a larger JVM heap.
>> >> >>>
>> >> >>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zsongbo@gmail.com
>> >
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> Please also post your JVM heap and GC options, i.e. the settings in
>> >> >>>> cassandra.in.sh. And what about your node hardware?
>> >> >>>>
>> >> >>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <blueflycn@gmail.com
>> >
>> >> >>>> wrote:
>> >> >>>>>
>> >> >>>>> Hi
>> >> >>>>> I am doing an insert test with 9 nodes; the command:
>> >> >>>>>>
>> >> >>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
>> >> >>>>>> 10.0.0.1,10.0.0.2.....
>> >> >>>>>
>> >> >>>>> 5 of the 9 nodes crashed, and only about 6,500,000 rows were
>> >> >>>>> inserted. I checked system.log and it seems the reason is 'out of
>> >> >>>>> memory'. I don't know if this has something to do with my
>> >> >>>>> settings. Any idea about this?
>> >> >>>>> Thank you; the following are the errors from system.log

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Eric Evans <ee...@rackspace.com>.
On Tue, 2010-04-20 at 10:39 +0800, Ken Sandney wrote:
> Sorry I just don't know how to resolve this :)

http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
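The linked FAQ's point is that a small heap fills up when inserts arrive faster than memtables can be flushed. A rough, illustrative sizing check follows; the multiplier and the example 64MB memtable threshold are assumptions for illustration, not official Cassandra numbers:

```python
def min_heap_mb(memtable_throughput_mb, hot_column_families, multiplier=3):
    """Back-of-envelope heap needed just for memtables.

    Assumes each actively written column family can hold roughly
    `multiplier` x its memtable threshold in heap while flushing
    (an illustrative rule of thumb, not an official formula).
    """
    return memtable_throughput_mb * multiplier * hot_column_families

# With an assumed 64MB memtable threshold and one hot column family,
# memtables alone may want ~192MB, leaving little of a 1GB heap for
# caches, compaction, and hinted handoff under a 1000-thread load.
print(min_heap_mb(64, 1))  # 192
```

Lowering the memtable thresholds, as the FAQ suggests, shrinks this term so the rest of the workload fits in the heap.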

> On Tue, Apr 20, 2010 at 10:37 AM, Jonathan Ellis <jb...@gmail.com>
> wrote:
> 
> > Ken, I linked you to the FAQ answering your problem in the first
> reply
> > you got.  Please don't hijack my replies to other people; that's
> rude. 

-- 
Eric Evans
eevans@rackspace.com


Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Ken Sandney <bl...@gmail.com>.
Sorry I just don't know how to resolve this :)

On Tue, Apr 20, 2010 at 10:37 AM, Jonathan Ellis <jb...@gmail.com> wrote:

> Ken, I linked you to the FAQ answering your problem in the first reply
> you got.  Please don't hijack my replies to other people; that's rude.
>
> On Mon, Apr 19, 2010 at 9:32 PM, Ken Sandney <bl...@gmail.com> wrote:
> > I am just running Cassandra on normal boxes, and grants 1GB of total 2GB
> to
> > Cassandra is reasonable I think. Can this problem be resolved by tuning
> the
> > thresholds described on this page , or just be waiting for the 0.7
> release
> > as Brandon mentioned?
> >
> > On Tue, Apr 20, 2010 at 10:15 AM, Jonathan Ellis <jb...@gmail.com>
> wrote:
> >>
> >> Schubert, I don't know if you saw this in the other thread referencing
> >> your slides:
> >>
> >> It looks like the slowdown doesn't hit until after several GCs,
> >> although it's hard to tell since the scale is different on the GC
> >> graph and the insert throughput ones.
> >>
> >> Perhaps this is compaction kicking in, not GCs?  Definitely the extra
> >> I/O + CPU load from compaction will cause a drop in throughput.
> >>
> >> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com>
> wrote:
> >> > -Xmx1G is too small.
> >> > In my cluster, 8GB ram on each node, and I grant 6GB to cassandra.
> >> >
> >> > Please see my test @
> >> > http://www.slideshare.net/schubertzhang/presentations
> >> >
> >> > –Memory, GC..., always to be the bottleneck and big issue of
> java-based
> >> > infrastructure software!
> >> >
> >> > References:
> >> > –http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
> >> > –https://issues.apache.org/jira/browse/CASSANDRA-896
> >> > (LinkedBlockingQueue
> >> > issue, fixed in jdk-6u19)
> >> >
> >> > In fact, whenever I use java-based infrastructure software such as
> >> > Cassandra, Hadoop, HBase, etc., I eventually run into the same
> >> > memory/GC pain.
> >> >
> >> > So we provision better hardware with more RAM (such as 32GB~64GB) and
> >> > more CPU cores (such as 8~16), and we still cannot prevent the
> >> > Out-Of-Memory-Error.
> >> >
> >> > I am thinking that maybe it is not right to leave the job of memory
> >> > control to the JVM.
> >> >
> >> > I have long experience in telecom and embedded software over the past
> >> > ten years, where robust programs and a small RAM footprint are
> >> > required. I want to discuss the following ideas with the community:
> >> >
> >> > 1. Manage the memory ourselves: allocate objects/resources (memory)
> >> > at an initialization phase, and assign instances at runtime.
> >> > 2. Reject requests when resources run short, instead of throwing an
> >> > OOME and exiting (crashing).
> >> > 3. I know this is not easy in a Java program.
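Idea (2) above, rejecting work under resource pressure instead of letting the JVM die with an OOME, can be sketched with a plain semaphore as an admission gate. This is a hypothetical illustration, not Cassandra code; the `AdmissionSketch` class and its API are made up for the example:

```java
import java.util.concurrent.Semaphore;

// Sketch: bound the number of in-flight requests and reject the excess
// up front, rather than queueing unboundedly until the heap fills up.
public class AdmissionSketch {
    private final Semaphore slots;

    public AdmissionSketch(int maxInFlight) {
        this.slots = new Semaphore(maxInFlight);
    }

    /** True if the request may proceed; false means reject (caller retries later). */
    public boolean tryAdmit() {
        return slots.tryAcquire();
    }

    /** Must be called once per admitted request when it finishes. */
    public void release() {
        slots.release();
    }

    public static void main(String[] args) {
        AdmissionSketch ctl = new AdmissionSketch(2);
        System.out.println(ctl.tryAdmit()); // true
        System.out.println(ctl.tryAdmit()); // true
        System.out.println(ctl.tryAdmit()); // false: rejected, not an OOME
        ctl.release();
        System.out.println(ctl.tryAdmit()); // true again after a slot frees
    }
}
```

The rejected caller gets a clean "overloaded" signal it can back off on, which is much easier to recover from than a crashed node.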
> >> >
> >> > Schubert
> >> >
> >> > On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com>
> >> > wrote:
> >> >>
> >> >> Here are my JVM options; I didn't modify them from the defaults in
> >> >> cassandra.in.sh:
> >> >>>
> >> >>> # Arguments to pass to the JVM
> >> >>>
> >> >>> JVM_OPTS=" \
> >> >>>
> >> >>>         -ea \
> >> >>>
> >> >>>         -Xms128M \
> >> >>>
> >> >>>         -Xmx1G \
> >> >>>
> >> >>>         -XX:TargetSurvivorRatio=90 \
> >> >>>
> >> >>>         -XX:+AggressiveOpts \
> >> >>>
> >> >>>         -XX:+UseParNewGC \
> >> >>>
> >> >>>         -XX:+UseConcMarkSweepGC \
> >> >>>
> >> >>>         -XX:+CMSParallelRemarkEnabled \
> >> >>>
> >> >>>         -XX:+HeapDumpOnOutOfMemoryError \
> >> >>>
> >> >>>         -XX:SurvivorRatio=128 \
> >> >>>
> >> >>>         -XX:MaxTenuringThreshold=0 \
> >> >>>
> >> >>>         -Dcom.sun.management.jmxremote.port=8080 \
> >> >>>
> >> >>>         -Dcom.sun.management.jmxremote.ssl=false \
> >> >>>
> >> >>>         -Dcom.sun.management.jmxremote.authenticate=false"
> >> >>
> >> >> and my boxes are normal PCs with 2GB RAM and an Intel E3200 @ 2.40GHz.
> >> >> By the way, I am using the latest Sun JDK.
> >> >> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com>
> >> >> wrote:
> >> >>>
> >> >>> It seems you should configure a larger JVM heap.
> >> >>>
> >> >>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com>
> >> >>> wrote:
> >> >>>>
> >> >>>> Please also post your JVM heap and GC options, i.e. the settings in
> >> >>>> cassandra.in.sh.
> >> >>>> And what about your node hardware?
> >> >>>>
> >> >>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com>
> >> >>>> wrote:
> >> >>>>>
> >> >>>>> Hi
> >> >>>>> I am doing an insert test with 9 nodes; the command:
> >> >>>>>>
> >> >>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
> >> >>>>>> 10.0.0.1,10.0.0.2.....
> >> >>>>>
> >> >>>>> and 5 of the 9 nodes crashed; only about 6'500'000 rows were
> >> >>>>> inserted.
> >> >>>>> I checked the system.log and it seems the reason is 'out of
> >> >>>>> memory'.
> >> >>>>> I don't know if this has something to do with my settings.
> >> >>>>> Any idea about this?
> >> >>>>> Thank you, and the following are the errors from system.log
> >> >>>>>
> >> >>>>>>
> >> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013
> >> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
> >> >>>>>>
> >> >>>>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
> >> >>>>>>
> >> >>>>>>         at java.util.TimerThread.run(Timer.java:462)
> >> >>>>>>
> >> >>>>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932
> >> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >> >>>>>> Thread[ROW-MUTATION-STAGE:9,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >> >>>>>>
> >> >>>>>>         at java.lang.Thread.run(Thread.java:619)
> >> >>>>>
> >> >>>>> and another
> >> >>>>>>
> >> >>>>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java
> >> >>>>>> (line
> >> >>>>>> 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed
> leaving
> >> >>>>>> 1094238944 used; max is 1211826176
> >> >>>>>>
> >> >>>>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java
> >> >>>>>> (line
> >> >>>>>> 78) Fatal exception in thread Thread[Thread-14,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>> sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
> >> >>>>>>
> >> >>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> >> >>>>>>
> >> >>>>>>         at java.io.InputStream.read(InputStream.java:85)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>> java.io.DataInputStream.readInt(DataInputStream.java:370)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
> >> >>>>>>
> >> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514
> >> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
> >> >>>>>> futuretask
> >> >>>>>>
> >> >>>>>> java.util.concurrent.ExecutionException:
> >> >>>>>> java.lang.OutOfMemoryError:
> >> >>>>>> Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> >> >>>>>>
> >> >>>>>>         at
> java.util.concurrent.FutureTask.get(FutureTask.java:83)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >> >>>>>>
> >> >>>>>>         at java.lang.Thread.run(Thread.java:619)
> >> >>>>>>
> >> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java
> >> >>>>>> (line 162) Completed flushing
> >> >>>>>> /m/cassandra/data/Keyspace1/Standard1-623-Data.db
> >> >>>>>>
> >> >>>>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java
> >> >>>>>> (line
> >> >>>>>> 78) Fatal exception in thread Thread[Thread-13,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java
> >> >>>>>> (line
> >> >>>>>> 78) Fatal exception in thread Thread[Thread-15,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514
> >> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>
> >> >>>>>
> >> >>>>> and
> >> >>>>>>
> >> >>>>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216
> >> >>>>>> reclaimed leaving 1094527752 used; max is 1211826176
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>> Dumping heap to java_pid28670.hprof ...
> >> >>>>>>
> >> >>>>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008
> >> >>>>>> reclaimed leaving 1093996648 used; max is 1211826176
> >> >>>>>>
> >> >>>>>> Heap dump file created [1095841554 bytes in 12.960 secs]
> >> >>>>>>
> >> >>>>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968
> >> >>>>>> reclaimed leaving 1093960776 used; max is 1211826176
> >> >>>>>>
> >> >>>>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint
> >> >>>>>> delivery,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
> >> >>>>>>
> >> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
> >> >>>>>>
> >> >>>>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
> >> >>>>>>
> >> >>>>>>         at java.util.ArrayList.add(ArrayList.java:378)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >> >>>>>>
> >> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >> >>>>>>
> >> >>>>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240
> >> >>>>>> reclaimed leaving 1093862712 used; max is 1211826176
> >> >>>>>
> >> >>>>> and
> >> >>>>>>
> >> >>>>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089
> CassandraDaemon.java
> >> >>>>>> (line 78) Fatal exception in thread Thread[Hint delivery,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.RuntimeException: java.lang.RuntimeException:
> >> >>>>>> java.util.concurrent.ExecutionException:
> >> >>>>>> java.lang.OutOfMemoryError: Java
> >> >>>>>> heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
> >> >>>>>>
> >> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >> >>>>>>
> >> >>>>>> Caused by: java.lang.RuntimeException:
> >> >>>>>> java.util.concurrent.ExecutionException:
> >> >>>>>> java.lang.OutOfMemoryError: Java
> >> >>>>>> heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >> >>>>>>
> >> >>>>>>         ... 1 more
> >> >>>>>>
> >> >>>>>> Caused by: java.util.concurrent.ExecutionException:
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> >> >>>>>>
> >> >>>>>>         at
> java.util.concurrent.FutureTask.get(FutureTask.java:111)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
> >> >>>>>>
> >> >>>>>>         ... 4 more
> >> >>>>>>
> >> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089
> >> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
> >> >>>>>> futuretask
> >> >>>>>>
> >> >>>>>> java.util.concurrent.ExecutionException:
> >> >>>>>> java.lang.OutOfMemoryError:
> >> >>>>>> Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> >> >>>>>>
> >> >>>>>>         at
> java.util.concurrent.FutureTask.get(FutureTask.java:111)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >> >>>>>>
> >> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >> >>>>>>
> >> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089
> >> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
> >> >>>>>> futuretask
> >> >>>>>>
> >> >>>>>> java.util.concurrent.ExecutionException:
> >> >>>>>> java.lang.OutOfMemoryError:
> >> >>>>>> Java heap space
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> >> >>>>>>
> >> >>>>>>         at
> java.util.concurrent.FutureTask.get(FutureTask.java:111)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
> >> >>>>>>
> >> >>>>>>         at
> >> >>>>>>
> >> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >> >>>>>>
> >> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >> >>>>>>
> >> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572
> >> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
> >> >>>>>>
> >> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >> >>>>>>
> >> >>>>>>         at java.util.HashMap.<init>(HashMap.java:226)
> >> >>>>>>         at
> >> >>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
> >> >>>>>>         at java.util.TimerThread.mainLoop(Timer.java:534)
> >> >>>>>>         at java.util.TimerThread.run(Timer.java:484)
> >> >>>>>
> >> >>>>>
> >> >>>
> >> >>
> >> >
> >> >
> >
> >
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Jonathan Ellis <jb...@gmail.com>.
Ken, I linked you to the FAQ answering your problem in the first reply
you got.  Please don't hijack my replies to other people; that's rude.

On Mon, Apr 19, 2010 at 9:32 PM, Ken Sandney <bl...@gmail.com> wrote:
> I am just running Cassandra on normal boxes, and granting 1GB of the total
> 2GB to Cassandra seems reasonable to me. Can this problem be resolved by
> tuning the thresholds described on that page, or only by waiting for the
> 0.7 release as Brandon mentioned?
>
> On Tue, Apr 20, 2010 at 10:15 AM, Jonathan Ellis <jb...@gmail.com> wrote:
>>
>> Schubert, I don't know if you saw this in the other thread referencing
>> your slides:
>>
>> It looks like the slowdown doesn't hit until after several GCs,
>> although it's hard to tell since the scale is different on the GC
>> graph and the insert throughput ones.
>>
>> Perhaps this is compaction kicking in, not GCs?  Definitely the extra
>> I/O + CPU load from compaction will cause a drop in throughput.
>>
>> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
>> > -Xmx1G is too small.
>> > In my cluster, 8GB ram on each node, and I grant 6GB to cassandra.
>> >
>> > Please see my test @
>> > http://www.slideshare.net/schubertzhang/presentations
>> >
>> > Memory and GC are always the bottleneck and the big issue with
>> > java-based infrastructure software!
>> >
>> > References:
>> > –http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
>> > –https://issues.apache.org/jira/browse/CASSANDRA-896
>> > (LinkedBlockingQueue
>> > issue, fixed in jdk-6u19)
>> >
>> > In fact, whenever I use java-based infrastructure software such as
>> > Cassandra, Hadoop, HBase, etc., I eventually run into the same
>> > memory/GC pain.
>> >
>> > So we provision better hardware with more RAM (such as 32GB~64GB) and
>> > more CPU cores (such as 8~16), and we still cannot prevent the
>> > Out-Of-Memory-Error.
>> >
>> > I am thinking that maybe it is not right to leave the job of memory
>> > control to the JVM.
>> >
>> > I have long experience in telecom and embedded software over the past
>> > ten years, where robust programs and a small RAM footprint are
>> > required. I want to discuss the following ideas with the community:
>> >
>> > 1. Manage the memory ourselves: allocate objects/resources (memory)
>> > at an initialization phase, and assign instances at runtime.
>> > 2. Reject requests when resources run short, instead of throwing an
>> > OOME and exiting (crashing).
>> > 3. I know this is not easy in a Java program.
>> >
>> > Schubert
>> >
>> > On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com>
>> > wrote:
>> >>
>> >> Here are my JVM options; I didn't modify them from the defaults in
>> >> cassandra.in.sh:
>> >>>
>> >>> # Arguments to pass to the JVM
>> >>>
>> >>> JVM_OPTS=" \
>> >>>
>> >>>         -ea \
>> >>>
>> >>>         -Xms128M \
>> >>>
>> >>>         -Xmx1G \
>> >>>
>> >>>         -XX:TargetSurvivorRatio=90 \
>> >>>
>> >>>         -XX:+AggressiveOpts \
>> >>>
>> >>>         -XX:+UseParNewGC \
>> >>>
>> >>>         -XX:+UseConcMarkSweepGC \
>> >>>
>> >>>         -XX:+CMSParallelRemarkEnabled \
>> >>>
>> >>>         -XX:+HeapDumpOnOutOfMemoryError \
>> >>>
>> >>>         -XX:SurvivorRatio=128 \
>> >>>
>> >>>         -XX:MaxTenuringThreshold=0 \
>> >>>
>> >>>         -Dcom.sun.management.jmxremote.port=8080 \
>> >>>
>> >>>         -Dcom.sun.management.jmxremote.ssl=false \
>> >>>
>> >>>         -Dcom.sun.management.jmxremote.authenticate=false"
>> >>
>> >> and my boxes are normal PCs with 2GB RAM and an Intel E3200 @ 2.40GHz.
>> >> By the way, I am using the latest Sun JDK.
>> >> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com>
>> >> wrote:
>> >>>
>> >>> It seems you should configure a larger JVM heap.
>> >>>
>> >>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com>
>> >>> wrote:
>> >>>>
>> >>>> Please also post your JVM heap and GC options, i.e. the settings in
>> >>>> cassandra.in.sh.
>> >>>> And what about your node hardware?
>> >>>>
>> >>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> Hi
>> >>>>> I am doing an insert test with 9 nodes; the command:
>> >>>>>>
>> >>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
>> >>>>>> 10.0.0.1,10.0.0.2.....
>> >>>>>
>> >>>>> and 5 of the 9 nodes crashed; only about 6'500'000 rows were
>> >>>>> inserted.
>> >>>>> I checked the system.log and it seems the reason is 'out of
>> >>>>> memory'.
>> >>>>> I don't know if this has something to do with my settings.
>> >>>>> Any idea about this?
>> >>>>> Thank you, and the following are the errors from system.log
>> >>>>>
>> >>>>>>
>> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013
>> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>> >>>>>>
>> >>>>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
>> >>>>>>
>> >>>>>>         at java.util.TimerThread.run(Timer.java:462)
>> >>>>>>
>> >>>>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932
>> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>> >>>>>> Thread[ROW-MUTATION-STAGE:9,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:619)
>> >>>>>
>> >>>>> and another
>> >>>>>>
>> >>>>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java
>> >>>>>> (line
>> >>>>>> 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving
>> >>>>>> 1094238944 used; max is 1211826176
>> >>>>>>
>> >>>>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java
>> >>>>>> (line
>> >>>>>> 78) Fatal exception in thread Thread[Thread-14,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
>> >>>>>>
>> >>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
>> >>>>>>
>> >>>>>>         at java.io.InputStream.read(InputStream.java:85)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
>> >>>>>>
>> >>>>>>         at
>> >>>>>> java.io.DataInputStream.readInt(DataInputStream.java:370)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
>> >>>>>>
>> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514
>> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
>> >>>>>> futuretask
>> >>>>>>
>> >>>>>> java.util.concurrent.ExecutionException:
>> >>>>>> java.lang.OutOfMemoryError:
>> >>>>>> Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>> >>>>>>
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:619)
>> >>>>>>
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java
>> >>>>>> (line 162) Completed flushing
>> >>>>>> /m/cassandra/data/Keyspace1/Standard1-623-Data.db
>> >>>>>>
>> >>>>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java
>> >>>>>> (line
>> >>>>>> 78) Fatal exception in thread Thread[Thread-13,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java
>> >>>>>> (line
>> >>>>>> 78) Fatal exception in thread Thread[Thread-15,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514
>> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>
>> >>>>>
>> >>>>> and
>> >>>>>>
>> >>>>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216
>> >>>>>> reclaimed leaving 1094527752 used; max is 1211826176
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>> Dumping heap to java_pid28670.hprof ...
>> >>>>>>
>> >>>>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008
>> >>>>>> reclaimed leaving 1093996648 used; max is 1211826176
>> >>>>>>
>> >>>>>> Heap dump file created [1095841554 bytes in 12.960 secs]
>> >>>>>>
>> >>>>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968
>> >>>>>> reclaimed leaving 1093960776 used; max is 1211826176
>> >>>>>>
>> >>>>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint
>> >>>>>> delivery,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
>> >>>>>>
>> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
>> >>>>>>
>> >>>>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
>> >>>>>>
>> >>>>>>         at java.util.ArrayList.add(ArrayList.java:378)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>>
>> >>>>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240
>> >>>>>> reclaimed leaving 1093862712 used; max is 1211826176
>> >>>>>
>> >>>>> and
>> >>>>>>
>> >>>>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java
>> >>>>>> (line 78) Fatal exception in thread Thread[Hint delivery,5,main]
>> >>>>>>
>> >>>>>> java.lang.RuntimeException: java.lang.RuntimeException:
>> >>>>>> java.util.concurrent.ExecutionException:
>> >>>>>> java.lang.OutOfMemoryError: Java
>> >>>>>> heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>>
>> >>>>>> Caused by: java.lang.RuntimeException:
>> >>>>>> java.util.concurrent.ExecutionException:
>> >>>>>> java.lang.OutOfMemoryError: Java
>> >>>>>> heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>> >>>>>>
>> >>>>>>         ... 1 more
>> >>>>>>
>> >>>>>> Caused by: java.util.concurrent.ExecutionException:
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> >>>>>>
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
>> >>>>>>
>> >>>>>>         ... 4 more
>> >>>>>>
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089
>> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
>> >>>>>> futuretask
>> >>>>>>
>> >>>>>> java.util.concurrent.ExecutionException:
>> >>>>>> java.lang.OutOfMemoryError:
>> >>>>>> Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> >>>>>>
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>>
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089
>> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
>> >>>>>> futuretask
>> >>>>>>
>> >>>>>> java.util.concurrent.ExecutionException:
>> >>>>>> java.lang.OutOfMemoryError:
>> >>>>>> Java heap space
>> >>>>>>
>> >>>>>>         at
>> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> >>>>>>
>> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>> >>>>>>
>> >>>>>>         at
>> >>>>>>
>> >>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >>>>>>
>> >>>>>>         at java.lang.Thread.run(Thread.java:636)
>> >>>>>>
>> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572
>> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>> >>>>>>
>> >>>>>> java.lang.OutOfMemoryError: Java heap space
>> >>>>>>
>> >>>>>>         at java.util.HashMap.<init>(HashMap.java:226)        at
>> >>>>>>
>> >>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>> >>>>>>        at java.util.TimerThread.mainLoop(Timer.java:534)        at
>> >>>>>> java.util.TimerThread.run(Timer.java:484)
>> >>>>>
>> >>>>>
>> >>>
>> >>
>> >
>> >
>
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Ken Sandney <bl...@gmail.com>.
I am just running Cassandra on ordinary boxes, and granting 1GB of the total
2GB to Cassandra seems reasonable to me. Can this problem be resolved by
tuning the thresholds described on this
page<http://wiki.apache.org/cassandra/MemtableThresholds>,
or only by waiting for the 0.7 release as Brandon mentioned?
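
For reference, the thresholds that page describes are set in storage-conf.xml
in 0.6.x; lowering them shrinks how much data each memtable keeps on the heap
before flushing, which is the main lever short of adding RAM. The element
names below are my reading of the 0.6 default config, and the values are
illustrative guesses for a ~1GB heap, not tested recommendations:

```xml
<!-- storage-conf.xml (0.6.x): smaller memtables flush sooner and hold
     less data on-heap. Illustrative values, not recommendations. -->
<MemtableThroughputInMB>32</MemtableThroughputInMB>
<MemtableOperationsInMillions>0.1</MemtableOperationsInMillions>
<MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
```

Note that these limits apply per column family, and a memtable's memory is
not released until its flush completes, so with many column families or a
backed-up flush queue the real footprint can still exceed the thresholds.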

On Tue, Apr 20, 2010 at 10:15 AM, Jonathan Ellis <jb...@gmail.com> wrote:

> Schubert, I don't know if you saw this in the other thread referencing
> your slides:
>
> It looks like the slowdown doesn't hit until after several GCs,
> although it's hard to tell since the scale is different on the GC
> graph and the insert throughput ones.
>
> Perhaps this is compaction kicking in, not GCs?  Definitely the extra
> I/O + CPU load from compaction will cause a drop in throughput.
>
> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
> > -Xmx1G is too small.
> > In my cluster, each node has 8GB of RAM, and I grant 6GB to Cassandra.
> >
> > Please see my test @ http://www.slideshare.net/schubertzhang/presentations
> >
> > –Memory and GC are always the bottleneck and the big issue of Java-based
> > infrastructure software!
> >
> > References:
> > –http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
> > –https://issues.apache.org/jira/browse/CASSANDRA-896 (LinkedBlockingQueue
> > issue, fixed in jdk-6u19)
> >
> > In fact, whenever I use Java-based infrastructure software such as
> > Cassandra, Hadoop, or HBase, I eventually run into the same memory/GC
> > pain.
> >
> > So we provide higher-end hardware with more RAM (such as 32GB~64GB) and
> > more CPU cores (such as 8~16), and we still cannot prevent the
> > OutOfMemoryError.
> >
> > I am thinking that maybe it is not right to leave the job of memory
> > control to the JVM.
> >
> > I have long experience in telecom and embedded software over the past
> > ten years, where robust programs and a small RAM footprint are required.
> > I want to discuss the following ideas with the community:
> >
> > 1. Manage the memory ourselves: allocate objects/resources (memory) at
> > the initialization phase, and assign instances at runtime.
> > 2. Reject requests when short of resources, instead of throwing an OOME
> > and exiting (crashing).
> > 3. I know it is not easy in a Java program.
> >
> > Schubert
> >
> > On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com>
> wrote:
> >>
> >> here are my JVM options; I didn't modify them from the defaults in
> >> cassandra.in.sh:
> >>>
> >>> # Arguments to pass to the JVM
> >>>
> >>> JVM_OPTS=" \
> >>>
> >>>         -ea \
> >>>
> >>>         -Xms128M \
> >>>
> >>>         -Xmx1G \
> >>>
> >>>         -XX:TargetSurvivorRatio=90 \
> >>>
> >>>         -XX:+AggressiveOpts \
> >>>
> >>>         -XX:+UseParNewGC \
> >>>
> >>>         -XX:+UseConcMarkSweepGC \
> >>>
> >>>         -XX:+CMSParallelRemarkEnabled \
> >>>
> >>>         -XX:+HeapDumpOnOutOfMemoryError \
> >>>
> >>>         -XX:SurvivorRatio=128 \
> >>>
> >>>         -XX:MaxTenuringThreshold=0 \
> >>>
> >>>         -Dcom.sun.management.jmxremote.port=8080 \
> >>>
> >>>         -Dcom.sun.management.jmxremote.ssl=false \
> >>>
> >>>         -Dcom.sun.management.jmxremote.authenticate=false"
> >>
> >> and my box is a normal PC with 2GB of RAM and an Intel E3200 @ 2.40GHz.
> >> By the way, I am using the latest Sun JDK.
> >> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com>
> wrote:
> >>>
> >>> It seems you should configure a larger JVM heap.
> >>>
> >>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com>
> >>> wrote:
> >>>>
> >>>> Please also post your JVM heap and GC options, i.e. the settings in
> >>>> cassandra.in.sh. And what about your node hardware?
> >>>>
> >>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> Hi,
> >>>>> I am doing an insert test with 9 nodes, the command:
> >>>>>>
> >>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
> >>>>>> 10.0.0.1,10.0.0.2.....
> >>>>>
> >>>>> and 5 of the 9 nodes crashed; only about 6'500'000 rows were
> >>>>> inserted.
> >>>>> I checked the system.log, and it seems the cause is 'out of memory'.
> >>>>> I don't know if this has something to do with my settings.
> >>>>> Any idea about this?
> >>>>> Thank you. The following are the errors from system.log:
> >>>>>
> >>>>>>
> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013
> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
> >>>>>>
> >>>>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
> >>>>>>
> >>>>>>         at java.util.TimerThread.run(Timer.java:462)
> >>>>>>
> >>>>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932
> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >>>>>> Thread[ROW-MUTATION-STAGE:9,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >>>>>>
> >>>>>>         at java.lang.Thread.run(Thread.java:619)
> >>>>>
> >>>>> and another
> >>>>>>
> >>>>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java (line
> >>>>>> 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving
> >>>>>> 1094238944 used; max is 1211826176
> >>>>>>
> >>>>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java (line
> >>>>>> 78) Fatal exception in thread Thread[Thread-14,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>         at
> sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
> >>>>>>
> >>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
> >>>>>>
> >>>>>>         at
> >>>>>> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
> >>>>>>
> >>>>>>         at
> >>>>>>
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
> >>>>>>
> >>>>>>         at
> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> >>>>>>
> >>>>>>         at java.io.InputStream.read(InputStream.java:85)
> >>>>>>
> >>>>>>         at
> >>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
> >>>>>>
> >>>>>>         at java.io.DataInputStream.readInt(DataInputStream.java:370)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
> >>>>>>
> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514
> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
> futuretask
> >>>>>>
> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> >>>>>> Java heap space
> >>>>>>
> >>>>>>         at
> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> >>>>>>
> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >>>>>>
> >>>>>>         at java.lang.Thread.run(Thread.java:619)
> >>>>>>
> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java
> >>>>>> (line 162) Completed flushing
> >>>>>> /m/cassandra/data/Keyspace1/Standard1-623-Data.db
> >>>>>>
> >>>>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java (line
> >>>>>> 78) Fatal exception in thread Thread[Thread-13,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java (line
> >>>>>> 78) Fatal exception in thread Thread[Thread-15,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514
> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>
> >>>>>
> >>>>> and
> >>>>>>
> >>>>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216
> >>>>>> reclaimed leaving 1094527752 used; max is 1211826176
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>> Dumping heap to java_pid28670.hprof ...
> >>>>>>
> >>>>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008
> >>>>>> reclaimed leaving 1093996648 used; max is 1211826176
> >>>>>>
> >>>>>> Heap dump file created [1095841554 bytes in 12.960 secs]
> >>>>>>
> >>>>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968
> >>>>>> reclaimed leaving 1093960776 used; max is 1211826176
> >>>>>>
> >>>>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint
> >>>>>> delivery,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
> >>>>>>
> >>>>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
> >>>>>>
> >>>>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
> >>>>>>
> >>>>>>         at java.util.ArrayList.add(ArrayList.java:378)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
> >>>>>>
> >>>>>>         at
> >>>>>> org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >>>>>>
> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >>>>>>
> >>>>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240
> >>>>>> reclaimed leaving 1093862712 used; max is 1211826176
> >>>>>
> >>>>> and
> >>>>>>
> >>>>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java
> >>>>>> (line 78) Fatal exception in thread Thread[Hint delivery,5,main]
> >>>>>>
> >>>>>> java.lang.RuntimeException: java.lang.RuntimeException:
> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> Java
> >>>>>> heap space
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
> >>>>>>
> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >>>>>>
> >>>>>> Caused by: java.lang.RuntimeException:
> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> Java
> >>>>>> heap space
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >>>>>>
> >>>>>>         ... 1 more
> >>>>>>
> >>>>>> Caused by: java.util.concurrent.ExecutionException:
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>         at
> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> >>>>>>
> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
> >>>>>>
> >>>>>>         ... 4 more
> >>>>>>
> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089
> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
> futuretask
> >>>>>>
> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> >>>>>> Java heap space
> >>>>>>
> >>>>>>         at
> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> >>>>>>
> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >>>>>>
> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >>>>>>
> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089
> >>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor
> futuretask
> >>>>>>
> >>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> >>>>>> Java heap space
> >>>>>>
> >>>>>>         at
> >>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> >>>>>>
> >>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
> >>>>>>
> >>>>>>         at
> >>>>>>
> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
> >>>>>>
> >>>>>>         at
> >>>>>>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >>>>>>
> >>>>>>         at java.lang.Thread.run(Thread.java:636)
> >>>>>>
> >>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572
> >>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
> >>>>>> Thread[CACHETABLE-TIMER-1,5,main]
> >>>>>>
> >>>>>> java.lang.OutOfMemoryError: Java heap space
> >>>>>>
> >>>>>>         at java.util.HashMap.<init>(HashMap.java:226)        at
> >>>>>>
> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
> >>>>>>        at java.util.TimerThread.mainLoop(Timer.java:534)        at
> >>>>>> java.util.TimerThread.run(Timer.java:484)
> >>>>>
> >>>>>
> >>>
> >>
> >
> >
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Jonathan Ellis <jb...@gmail.com>.
Schubert, I don't know if you saw this in the other thread referencing
your slides:

It looks like the slowdown doesn't hit until after several GCs,
although it's hard to tell since the scale is different on the GC
graph and the insert throughput ones.

Perhaps this is compaction kicking in, not GCs?  Definitely the extra
I/O + CPU load from compaction will cause a drop in throughput.
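
Whatever the trigger, the GCInspector lines quoted in this thread already show
a heap that CMS can no longer help. A quick back-of-the-envelope check (the
figures are copied from the log lines; the script itself is just a sketch):

```shell
# Figures copied from the "GC for ConcurrentMarkSweep" log lines above.
used=1093862712      # bytes still live after a full CMS cycle
max=1211826176       # heap ceiling reported by GCInspector (-Xmx1G plus overhead)
reclaimed=893240     # bytes that a 2115 ms CMS cycle managed to free

# Percentage of the heap still occupied after a full collection.
pct=$(( used * 100 / max ))
echo "heap ${pct}% full after CMS, only ${reclaimed} bytes reclaimed"
```

Spending about two seconds per CMS cycle to free under 1MB while roughly 90%
of the heap stays live is the classic pre-OOM death spiral: the live set
simply no longer fits in a 1GB heap, so either the heap must grow or the
memtable and in-flight mutation footprint must shrink.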

On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
> -Xmx1G is too small.
> In my cluster, each node has 8GB of RAM, and I grant 6GB to Cassandra.
>
> Please see my test @ http://www.slideshare.net/schubertzhang/presentations
>
> –Memory and GC are always the bottleneck and the big issue of Java-based
> infrastructure software!
>
> References:
> –http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
> –https://issues.apache.org/jira/browse/CASSANDRA-896 (LinkedBlockingQueue
> issue, fixed in jdk-6u19)
>
> In fact, whenever I use Java-based infrastructure software such as
> Cassandra, Hadoop, or HBase, I eventually run into the same memory/GC pain.
>
> So we provide higher-end hardware with more RAM (such as 32GB~64GB) and
> more CPU cores (such as 8~16), and we still cannot prevent the
> OutOfMemoryError.
>
> I am thinking that maybe it is not right to leave the job of memory
> control to the JVM.
>
> I have long experience in telecom and embedded software over the past ten
> years, where robust programs and a small RAM footprint are required. I want
> to discuss the following ideas with the community:
>
> 1. Manage the memory ourselves: allocate objects/resources (memory) at the
> initialization phase, and assign instances at runtime.
> 2. Reject requests when short of resources, instead of throwing an OOME
> and exiting (crashing).
> 3. I know it is not easy in a Java program.
>
> Schubert
>
> On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com> wrote:
>>
>> here are my JVM options; I didn't modify them from the defaults in
>> cassandra.in.sh:
>>>
>>> # Arguments to pass to the JVM
>>>
>>> JVM_OPTS=" \
>>>
>>>         -ea \
>>>
>>>         -Xms128M \
>>>
>>>         -Xmx1G \
>>>
>>>         -XX:TargetSurvivorRatio=90 \
>>>
>>>         -XX:+AggressiveOpts \
>>>
>>>         -XX:+UseParNewGC \
>>>
>>>         -XX:+UseConcMarkSweepGC \
>>>
>>>         -XX:+CMSParallelRemarkEnabled \
>>>
>>>         -XX:+HeapDumpOnOutOfMemoryError \
>>>
>>>         -XX:SurvivorRatio=128 \
>>>
>>>         -XX:MaxTenuringThreshold=0 \
>>>
>>>         -Dcom.sun.management.jmxremote.port=8080 \
>>>
>>>         -Dcom.sun.management.jmxremote.ssl=false \
>>>
>>>         -Dcom.sun.management.jmxremote.authenticate=false"
>>
>> and my box is a normal PC with 2GB of RAM and an Intel E3200 @ 2.40GHz. By
>> the way, I am using the latest Sun JDK.
>> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com> wrote:
>>>
>>> It seems you should configure a larger JVM heap.
>>>
>>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com>
>>> wrote:
>>>>
>>>> Please also post your JVM heap and GC options, i.e. the settings in
>>>> cassandra.in.sh. And what about your node hardware?
>>>>
>>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com>
>>>> wrote:
>>>>>
>>>>> Hi,
>>>>> I am doing an insert test with 9 nodes, the command:
>>>>>>
>>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
>>>>>> 10.0.0.1,10.0.0.2.....
>>>>>
>>>>> and 5 of the 9 nodes crashed; only about 6'500'000 rows were
>>>>> inserted.
>>>>> I checked the system.log, and it seems the cause is 'out of memory'.
>>>>> I don't know if this has something to do with my settings.
>>>>> Any idea about this?
>>>>> Thank you. The following are the errors from system.log:
>>>>>
>>>>>>
>>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013
>>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>>>>>>
>>>>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
>>>>>>
>>>>>>         at java.util.TimerThread.run(Timer.java:462)
>>>>>>
>>>>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932
>>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>>>>>> Thread[ROW-MUTATION-STAGE:9,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>
>>>>>>         at java.lang.Thread.run(Thread.java:619)
>>>>>
>>>>> and another
>>>>>>
>>>>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java (line
>>>>>> 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving
>>>>>> 1094238944 used; max is 1211826176
>>>>>>
>>>>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java (line
>>>>>> 78) Fatal exception in thread Thread[Thread-14,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>         at sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
>>>>>>
>>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
>>>>>>
>>>>>>         at
>>>>>> sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
>>>>>>
>>>>>>         at
>>>>>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
>>>>>>
>>>>>>         at
>>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
>>>>>>
>>>>>>         at java.io.InputStream.read(InputStream.java:85)
>>>>>>
>>>>>>         at
>>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
>>>>>>
>>>>>>         at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
>>>>>>
>>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514
>>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>>>>
>>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>>>>> Java heap space
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>>>>>>
>>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>>
>>>>>>         at java.lang.Thread.run(Thread.java:619)
>>>>>>
>>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java
>>>>>> (line 162) Completed flushing
>>>>>> /m/cassandra/data/Keyspace1/Standard1-623-Data.db
>>>>>>
>>>>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java (line
>>>>>> 78) Fatal exception in thread Thread[Thread-13,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java (line
>>>>>> 78) Fatal exception in thread Thread[Thread-15,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514
>>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>
>>>>>
>>>>> and
>>>>>>
>>>>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216
>>>>>> reclaimed leaving 1094527752 used; max is 1211826176
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>> Dumping heap to java_pid28670.hprof ...
>>>>>>
>>>>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008
>>>>>> reclaimed leaving 1093996648 used; max is 1211826176
>>>>>>
>>>>>> Heap dump file created [1095841554 bytes in 12.960 secs]
>>>>>>
>>>>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968
>>>>>> reclaimed leaving 1093960776 used; max is 1211826176
>>>>>>
>>>>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint
>>>>>> delivery,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
>>>>>>
>>>>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
>>>>>>
>>>>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
>>>>>>
>>>>>>         at java.util.ArrayList.add(ArrayList.java:378)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>>>>>>
>>>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>>>
>>>>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240
>>>>>> reclaimed leaving 1093862712 used; max is 1211826176
>>>>>
>>>>> and
>>>>>>
>>>>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java
>>>>>> (line 78) Fatal exception in thread Thread[Hint delivery,5,main]
>>>>>>
>>>>>> java.lang.RuntimeException: java.lang.RuntimeException:
>>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>>>> heap space
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>>>>>>
>>>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>>>
>>>>>> Caused by: java.lang.RuntimeException:
>>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>>>> heap space
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>>>>>>
>>>>>>         ... 1 more
>>>>>>
>>>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>>>
>>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
>>>>>>
>>>>>>         ... 4 more
>>>>>>
>>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089
>>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>>>>
>>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>>>>> Java heap space
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>>>
>>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>>>
>>>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>>>
>>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089
>>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>>>>
>>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>>>>> Java heap space
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>>>
>>>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>>>>>>
>>>>>>         at
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>>>
>>>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>>>
>>>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572
>>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>>>>>> Thread[CACHETABLE-TIMER-1,5,main]
>>>>>>
>>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>         at java.util.HashMap.<init>(HashMap.java:226)
>>>>>>
>>>>>>         at
>>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>>>>>>
>>>>>>         at java.util.TimerThread.mainLoop(Timer.java:534)
>>>>>>
>>>>>>         at java.util.TimerThread.run(Timer.java:484)
>>>>>
>>>>>
>>>
>>
>
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Brandon Williams <dr...@gmail.com>.
On Fri, Apr 23, 2010 at 4:59 AM, richard yao <ri...@gmail.com>wrote:

> I got the same problem, and after that Cassandra can't be started.
> I want to know how to restart Cassandra after it has crashed.
> Thanks for any reply.
>

Perhaps supply the error when you restart it?

-Brandon

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by richard yao <ri...@gmail.com>.
I got the same problem, and after that Cassandra can't be started.
I want to know how to restart Cassandra after it has crashed.
Thanks for any reply.

RE: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Mark Jones <MJ...@imagehawk.com>.
I would think this is on the roadmap, just not available yet. It can be managed (to a large degree) by adjusting the heap size.

-----Original Message-----
From: Tatu Saloranta [mailto:tsaloranta@gmail.com]
Sent: Tuesday, April 20, 2010 12:18 PM
To: user@cassandra.apache.org
Subject: Re: 0.6.1 insert 1B rows, crashed when using py_stress

On Mon, Apr 19, 2010 at 7:12 PM, Brandon Williams <dr...@gmail.com> wrote:
> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
>>
>> 2. Reject requests when short of resources, instead of throwing an OOME
>> and exiting (crashing).
>
> Right, that is the crux of the problem. It will be addressed here:
> https://issues.apache.org/jira/browse/CASSANDRA-685

I think it would be great to get such "graceful degradation"
implemented: the first thing any service should do is protect itself
against meltdown.
Clients are better served by getting 50x responses (or rather their
equivalent for Thrift) to indicate transient overload than by having the
system driven into a GC death spiral, where requests time out but still
consume significant amounts of resources. Especially since returning an
error response is usually cheap compared to doing full processing.
It should then also be easy to hook up failure information via JMX to
expose it and allow alarming.

But this is of course more difficult with a distributed setup,
especially since different QoS for different requests would help (for
example: communication between nodes and other things related to
"accepted" requests should have higher priority than new incoming
requests).

-+ Tatu +-

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Tatu Saloranta <ts...@gmail.com>.
On Mon, Apr 19, 2010 at 7:12 PM, Brandon Williams <dr...@gmail.com> wrote:
> On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
>>
>> 2. Reject requests when short of resources, instead of throwing an OOME
>> and exiting (crashing).
>
> Right, that is the crux of the problem. It will be addressed here:
> https://issues.apache.org/jira/browse/CASSANDRA-685

I think it would be great to get such "graceful degradation"
implemented: the first thing any service should do is protect itself
against meltdown.
Clients are better served by getting 50x responses (or rather their
equivalent for Thrift) to indicate transient overload than by having the
system driven into a GC death spiral, where requests time out but still
consume significant amounts of resources. Especially since returning an
error response is usually cheap compared to doing full processing.
It should then also be easy to hook up failure information via JMX to
expose it and allow alarming.

But this is of course more difficult with a distributed setup,
especially since different QoS for different requests would help (for
example: communication between nodes and other things related to
"accepted" requests should have higher priority than new incoming
requests).
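
The shape of that load shedding is easy to sketch. As an illustrative
example (hypothetical code, not Cassandra's actual implementation): a
bounded work queue that rejects new requests the moment it is full,
instead of letting the backlog grow until the heap is exhausted:

```python
import queue

# Bounded backlog: maxsize caps how much memory pending work can consume.
work = queue.Queue(maxsize=2)

def submit(item):
    """Return True if accepted, False if shed due to transient overload."""
    try:
        work.put_nowait(item)  # non-blocking: fail fast instead of piling up
        return True
    except queue.Full:
        return False  # the cheap "overloaded" error back to the client

results = [submit(i) for i in range(5)]
print(results)  # [True, True, False, False, False]
```

A server built this way returns the cheap overload error immediately,
which is exactly the behavior that avoids the GC death spiral described
above.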

-+ Tatu +-

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Brandon Williams <dr...@gmail.com>.
On Mon, Apr 19, 2010 at 9:06 PM, Schubert Zhang <zs...@gmail.com> wrote:
>
> 2. Reject requests when short of resources, instead of throwing an OOME
> and exiting (crashing).
>

Right, that is the crux of the problem. It will be addressed here:
https://issues.apache.org/jira/browse/CASSANDRA-685

-Brandon

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Schubert Zhang <zs...@gmail.com>.
-Xmx1G is too small.
In my cluster, each node has 8GB of RAM, and I grant 6GB to Cassandra.
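
As a rough back-of-envelope check (every number below is an illustrative
assumption, not a setting from this thread), it is easy to see why 1GB
is tight for a write-heavy node once memtables and JVM object overhead
are counted:

```python
# Hypothetical heap budget for a write-heavy 0.6-era node.
memtable_mb_per_cf = 64   # assumed per-CF memtable flush threshold
column_families    = 2    # e.g. Standard1 plus a system CF
flushing_memtables = 2    # memtables awaiting flush still occupy heap
overhead_factor    = 3    # rough JVM object overhead over raw data size

needed_mb = (memtable_mb_per_cf * column_families
             * (1 + flushing_memtables) * overhead_factor)
print(needed_mb)  # 1152 MB: already past a 1 GB -Xmx before any caches
```

Even with these modest assumptions the estimate exceeds the 1GB default,
leaving no headroom for caches, compaction, or hinted handoff.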

Please see my test @ http://www.slideshare.net/schubertzhang/presentations

Slide 5
–Memory and GC are always the bottleneck and a big issue for java-based
infrastructure software!

References:
–http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
–https://issues.apache.org/jira/browse/CASSANDRA-896  (LinkedBlockingQueue
issue, fixed in jdk-6u19)


In fact, whenever I use java-based infrastructure software such as
Cassandra, Hadoop, or HBase, I eventually run into the same memory/GC
issues.

So we provision better hardware with more RAM (such as 32GB~64GB) and
more CPU cores (such as 8~16), and we still cannot prevent the
Out-Of-Memory-Error.

I am thinking that maybe it is not right to leave the job of memory
control entirely to the JVM.

I have long experience with telecom and embedded software from the past
ten years, where programs must be robust and run in small RAM. I want to
discuss the following ideas with the community:

1. Manage memory ourselves: allocate objects/resources (memory) at the
initialization phase, and assign instances at runtime.
2. Reject requests when short of resources, instead of throwing an OOME
and exiting (crashing).

3. I know this is not easy in a Java program.
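
As an illustrative sketch of idea 1 (hypothetical code, not taken from
any of the projects mentioned): a fixed-size pool that allocates all its
buffers up front and fails fast when exhausted, so the resource limit
surfaces as a rejectable condition rather than an OOME:

```python
class FixedPool:
    """Preallocate a fixed number of objects at init; acquire() fails
    fast when the pool is exhausted instead of allocating more."""

    def __init__(self, size, factory):
        self._free = [factory() for _ in range(size)]  # allocate up front

    def acquire(self):
        if not self._free:
            # Resource limit reached: reject the request, don't crash.
            raise RuntimeError("pool exhausted; shed load instead of growing")
        return self._free.pop()

    def release(self, obj):
        self._free.append(obj)  # return the object for reuse

pool = FixedPool(2, lambda: bytearray(1024))
a = pool.acquire()
b = pool.acquire()
try:
    pool.acquire()
except RuntimeError:
    print("rejected")  # limit hit: a handleable error, not an OOME
pool.release(a)
```

The point of the sketch is that exhaustion becomes an ordinary error the
caller can translate into an overload response.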

Schubert

On Tue, Apr 20, 2010 at 9:40 AM, Ken Sandney <bl...@gmail.com> wrote:

> here are my JVM options (the defaults from cassandra.in.sh; I didn't
> modify them):
>
> # Arguments to pass to the JVM
>
> JVM_OPTS=" \
>
>         -ea \
>
>         -Xms128M \
>
>         -Xmx1G \
>
>         -XX:TargetSurvivorRatio=90 \
>
>         -XX:+AggressiveOpts \
>
>         -XX:+UseParNewGC \
>
>         -XX:+UseConcMarkSweepGC \
>
>         -XX:+CMSParallelRemarkEnabled \
>
>         -XX:+HeapDumpOnOutOfMemoryError \
>
>         -XX:SurvivorRatio=128 \
>
>         -XX:MaxTenuringThreshold=0 \
>
>         -Dcom.sun.management.jmxremote.port=8080 \
>
>         -Dcom.sun.management.jmxremote.ssl=false \
>
>         -Dcom.sun.management.jmxremote.authenticate=false"
>
>
> and my box is a normal PC with 2GB RAM and an Intel E3200 @ 2.40GHz. By the
> way, I am using the latest Sun JDK.
>
> On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com> wrote:
>
>> Seems you should configure larger jvm-heap.
>>
>>
>> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com>wrote:
>>
>>> Please also post your jvm-heap and GC options, i.e. the settings in
>>> cassandra.in.sh
>>> And what about your node hardware?
>>>
>>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com>wrote:
>>>
>>>> Hi
>>>> I am doing an insert test with 9 nodes, the command:
>>>>
>>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
>>>>> 10.0.0.1,10.0.0.2.....
>>>>
>>>> and 5 of the 9 nodes crashed; only about 6'500'000 rows were
>>>> inserted.
>>>> I checked the system.log and it seems the reason is 'out of memory'. I
>>>> don't know if this has something to do with my settings.
>>>> Any idea about this?
>>>> Thank you, and the following are the errors from system.log
>>>>
>>>>
>>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013 CassandraDaemon.java
>>>>> (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>         at
>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>>>>
>>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
>>>>
>>>>         at java.util.TimerThread.run(Timer.java:462)
>>>>
>>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932
>>>>> CassandraDaemon.java (line 78) Fatal exception in thread
>>>>> Thread[ROW-MUTATION-STAGE:9,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>         at
>>>>> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
>>>>
>>>>         at
>>>>> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
>>>>
>>>>         at
>>>>> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>
>>>>         at java.lang.Thread.run(Thread.java:619)
>>>>
>>>>
>>>> and another
>>>>
>>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java (line
>>>>> 110) GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving
>>>>> 1094238944 used; max is 1211826176
>>>>
>>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java (line 78)
>>>>> Fatal exception in thread Thread[Thread-14,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>         at sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
>>>>
>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
>>>>
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
>>>>
>>>>         at
>>>>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
>>>>
>>>>         at
>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
>>>>
>>>>         at java.io.InputStream.read(InputStream.java:85)
>>>>
>>>>         at
>>>>> sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
>>>>
>>>>         at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>
>>>>         at
>>>>> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
>>>>
>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514
>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>>
>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>>>> Java heap space
>>>>
>>>>         at
>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>>>>
>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>
>>>>         at
>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>
>>>>         at java.lang.Thread.run(Thread.java:619)
>>>>
>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java (line
>>>>> 162) Completed flushing /m/cassandra/data/Keyspace1/Standard1-623-Data.db
>>>>
>>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78)
>>>>> Fatal exception in thread Thread[Thread-13,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78)
>>>>> Fatal exception in thread Thread[Thread-15,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514 CassandraDaemon.java
>>>>> (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>
>>>> and
>>>>
>>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216 reclaimed
>>>>> leaving 1094527752 used; max is 1211826176
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> Dumping heap to java_pid28670.hprof ...
>>>>
>>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008 reclaimed
>>>>> leaving 1093996648 used; max is 1211826176
>>>>
>>>> Heap dump file created [1095841554 bytes in 12.960 secs]
>>>>
>>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968 reclaimed
>>>>> leaving 1093960776 used; max is 1211826176
>>>>
>>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint
>>>>> delivery,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
>>>>
>>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
>>>>
>>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
>>>>
>>>>         at java.util.ArrayList.add(ArrayList.java:378)
>>>>
>>>>         at
>>>>> java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
>>>>
>>>>         at
>>>>> java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>>>>
>>>>         at
>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>>>>
>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>
>>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240 reclaimed
>>>>> leaving 1093862712 used; max is 1211826176
>>>>
>>>>
>>>> and
>>>>
>>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java (line
>>>>> 78) Fatal exception in thread Thread[Hint delivery,5,main]
>>>>
>>>> java.lang.RuntimeException: java.lang.RuntimeException:
>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>>> heap space
>>>>
>>>>         at
>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>>>>
>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>
>>>> Caused by: java.lang.RuntimeException:
>>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>>> heap space
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>>>>
>>>>         at
>>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>>>>
>>>>         ... 1 more
>>>>
>>>> Caused by: java.util.concurrent.ExecutionException:
>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>         at
>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>
>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
>>>>
>>>>         ... 4 more
>>>>
>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089
>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>>
>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>>>> Java heap space
>>>>
>>>>         at
>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>
>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>
>>>>         at
>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>
>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>
>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089
>>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>>
>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
>>>>> Java heap space
>>>>
>>>>         at
>>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>>
>>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>>
>>>>         at
>>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>>
>>>>         at
>>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>>>>
>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>>
>>>>         at java.lang.Thread.run(Thread.java:636)
>>>>
>>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572 CassandraDaemon.java
>>>>> (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>>>>
>>>> java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>         at java.util.HashMap.<init>(HashMap.java:226)
>>>>
>>>>         at
>>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>>>>
>>>>         at java.util.TimerThread.mainLoop(Timer.java:534)
>>>>
>>>>         at java.util.TimerThread.run(Timer.java:484)
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Ken Sandney <bl...@gmail.com>.
here are my JVM options (the defaults from cassandra.in.sh; I didn't
modify them):

# Arguments to pass to the JVM

JVM_OPTS=" \

        -ea \

        -Xms128M \

        -Xmx1G \

        -XX:TargetSurvivorRatio=90 \

        -XX:+AggressiveOpts \

        -XX:+UseParNewGC \

        -XX:+UseConcMarkSweepGC \

        -XX:+CMSParallelRemarkEnabled \

        -XX:+HeapDumpOnOutOfMemoryError \

        -XX:SurvivorRatio=128 \

        -XX:MaxTenuringThreshold=0 \

        -Dcom.sun.management.jmxremote.port=8080 \

        -Dcom.sun.management.jmxremote.ssl=false \

        -Dcom.sun.management.jmxremote.authenticate=false"


and my box is a normal PC with 2GB RAM and an Intel E3200 @ 2.40GHz. By the
way, I am using the latest Sun JDK.

On Tue, Apr 20, 2010 at 9:33 AM, Schubert Zhang <zs...@gmail.com> wrote:

> Seems you should configure larger jvm-heap.
>
>
> On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com> wrote:
>
>> Please also post your jvm-heap and GC options, i.e. the settings in
>> cassandra.in.sh
>> And what about your node hardware?
>>
>> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com> wrote:
>>
>>> Hi
>>> I am doing an insert test with 9 nodes, the command:
>>>
>>>> stress.py -n 1000000000 -t 1000 -c 10 -o insert -i 5 -d
>>>> 10.0.0.1,10.0.0.2.....
>>>
>>> and 5 of the 9 nodes crashed; only about 6'500'000 rows were
>>> inserted.
>>> I checked the system.log and it seems the reason is 'out of memory'. I
>>> don't know if this has something to do with my settings.
>>> Any idea about this?
>>> Thank you, and the following are the errors from system.log
>>>
>>>
>>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 20:43:14,013 CassandraDaemon.java
>>>> (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>         at
>>>> org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>>>
>>>         at java.util.TimerThread.mainLoop(Timer.java:512)
>>>
>>>         at java.util.TimerThread.run(Timer.java:462)
>>>
>>> ERROR [ROW-MUTATION-STAGE:9] 2010-04-19 20:43:27,932 CassandraDaemon.java
>>>> (line 78) Fatal exception in thread Thread[ROW-MUTATION-STAGE:9,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>         at
>>>> java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:893)
>>>
>>>         at
>>>> java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1893)
>>>
>>>         at
>>>> org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:192)
>>>
>>>         at
>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:118)
>>>
>>>         at
>>>> org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:108)
>>>
>>>         at
>>>> org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:359)
>>>
>>>         at
>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:369)
>>>
>>>         at
>>>> org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:322)
>>>
>>>         at
>>>> org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:45)
>>>
>>>         at
>>>> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:40)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>
>>>         at java.lang.Thread.run(Thread.java:619)
>>>
>>>
>>> and another
>>>
>>>  INFO [GC inspection] 2010-04-19 21:13:09,034 GCInspector.java (line 110)
>>>> GC for ConcurrentMarkSweep: 2016 ms, 1239096 reclaimed leaving 1094238944
>>>> used; max is 1211826176
>>>
>>> ERROR [Thread-14] 2010-04-19 21:23:18,508 CassandraDaemon.java (line 78)
>>>> Fatal exception in thread Thread[Thread-14,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>         at sun.nio.ch.Util.releaseTemporaryDirectBuffer(Util.java:67)
>>>
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:212)
>>>
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
>>>
>>>         at
>>>> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:176)
>>>
>>>         at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
>>>
>>>         at java.io.InputStream.read(InputStream.java:85)
>>>
>>>         at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:64)
>>>
>>>         at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>
>>>         at
>>>> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:70)
>>>
>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:23:18,514
>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>
>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>> heap space
>>>
>>>         at
>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>>>
>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>
>>>         at
>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>
>>>         at
>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>
>>>         at java.lang.Thread.run(Thread.java:619)
>>>
>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>
>>>  INFO [FLUSH-WRITER-POOL:1] 2010-04-19 21:23:25,600 Memtable.java (line
>>>> 162) Completed flushing /m/cassandra/data/Keyspace1/Standard1-623-Data.db
>>>
>>> ERROR [Thread-13] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78)
>>>> Fatal exception in thread Thread[Thread-13,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>> ERROR [Thread-15] 2010-04-19 21:23:18,514 CassandraDaemon.java (line 78)
>>>> Fatal exception in thread Thread[Thread-15,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:23:18,514 CassandraDaemon.java
>>>> (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>
>>> and
>>>
>>>  INFO 21:00:31,319 GC for ConcurrentMarkSweep: 1417 ms, 206216 reclaimed
>>>> leaving 1094527752 used; max is 1211826176
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>> Dumping heap to java_pid28670.hprof ...
>>>
>>>  INFO 21:01:23,882 GC for ConcurrentMarkSweep: 2100 ms, 734008 reclaimed
>>>> leaving 1093996648 used; max is 1211826176
>>>
>>> Heap dump file created [1095841554 bytes in 12.960 secs]
>>>
>>>  INFO 21:01:45,082 GC for ConcurrentMarkSweep: 2089 ms, 769968 reclaimed
>>>> leaving 1093960776 used; max is 1211826176
>>>
>>> ERROR 21:01:49,559 Fatal exception in thread Thread[Hint delivery,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>         at java.util.Arrays.copyOf(Arrays.java:2772)
>>>
>>>         at java.util.Arrays.copyOf(Arrays.java:2746)
>>>
>>>         at java.util.ArrayList.ensureCapacity(ArrayList.java:187)
>>>
>>>         at java.util.ArrayList.add(ArrayList.java:378)
>>>
>>>         at
>>>> java.util.concurrent.ConcurrentSkipListMap.toList(ConcurrentSkipListMap.java:2341)
>>>
>>>         at
>>>> java.util.concurrent.ConcurrentSkipListMap$Values.toArray(ConcurrentSkipListMap.java:2445)
>>>
>>>         at
>>>> org.apache.cassandra.db.Memtable.getSliceIterator(Memtable.java:207)
>>>
>>>         at
>>>> org.apache.cassandra.db.filter.SliceQueryFilter.getMemColumnIterator(SliceQueryFilter.java:58)
>>>
>>>         at
>>>> org.apache.cassandra.db.filter.QueryFilter.getMemColumnIterator(QueryFilter.java:53)
>>>
>>>         at
>>>> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:816)
>>>
>>>         at
>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:750)
>>>
>>>         at
>>>> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:719)
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:175)
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>>>
>>>         at
>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>>>
>>>         at java.lang.Thread.run(Thread.java:636)
>>>
>>>  INFO 21:01:56,123 GC for ConcurrentMarkSweep: 2115 ms, 893240 reclaimed
>>>> leaving 1093862712 used; max is 1211826176
>>>
>>>
>>> and
>>>
>>> ERROR [Hint delivery] 2010-04-19 21:57:07,089 CassandraDaemon.java (line
>>>> 78) Fatal exception in thread Thread[Hint delivery,5,main]
>>>
>>> java.lang.RuntimeException: java.lang.RuntimeException:
>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>> heap space
>>>
>>>         at
>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>>>
>>>         at java.lang.Thread.run(Thread.java:636)
>>>
>>> Caused by: java.lang.RuntimeException:
>>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>> heap space
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:209)
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager.access$000(HintedHandOffManager.java:80)
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager$1.runMayThrow(HintedHandOffManager.java:100)
>>>
>>>         at
>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>>>
>>>         ... 1 more
>>>
>>> Caused by: java.util.concurrent.ExecutionException:
>>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>         at
>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>
>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>
>>>         at
>>>> org.apache.cassandra.db.HintedHandOffManager.deliverAllHints(HintedHandOffManager.java:205)
>>>
>>>         ... 4 more
>>>
>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>
>>> ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-04-19 21:57:07,089
>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>
>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>> heap space
>>>
>>>         at
>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>
>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>
>>>         at
>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>
>>>         at java.lang.Thread.run(Thread.java:636)
>>>
>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>
>>> ERROR [COMPACTION-POOL:1] 2010-04-19 21:57:07,089
>>>> DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
>>>
>>> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java
>>>> heap space
>>>
>>>         at
>>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>
>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>
>>>         at
>>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>>>
>>>         at
>>>> org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1118)
>>>
>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>>
>>>         at java.lang.Thread.run(Thread.java:636)
>>>
>>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>>
>>> ERROR [CACHETABLE-TIMER-1] 2010-04-19 21:56:29,572 CassandraDaemon.java
>>>> (line 78) Fatal exception in thread Thread[CACHETABLE-TIMER-1,5,main]
>>>
>>> java.lang.OutOfMemoryError: Java heap space
>>>
>>>         at java.util.HashMap.<init>(HashMap.java:226)
>>>         at org.apache.cassandra.utils.ExpiringMap$CacheMonitor.run(ExpiringMap.java:76)
>>>         at java.util.TimerThread.mainLoop(Timer.java:534)
>>>         at java.util.TimerThread.run(Timer.java:484)
>>>
>>>
>>>
>>>
>>
>>
>

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Schubert Zhang <zs...@gmail.com>.
It seems you should configure a larger JVM heap.
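
A rough sketch of what that change looks like in cassandra.in.sh (the heap sizes below are illustrative assumptions, not recommended values; pick them based on your node's RAM):

```shell
# Illustrative JVM options for bin/cassandra.in.sh (Cassandra 0.6.x appends
# these to JVM_OPTS). The 4G figures are placeholders; the GC log in this
# thread shows a max heap of only ~1.2 GB, which a heavy insert load exhausts.
JVM_OPTS="$JVM_OPTS -Xms4G"                          # initial heap
JVM_OPTS="$JVM_OPTS -Xmx4G"                          # maximum heap
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"         # CMS, as in the stock script
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError" # keep dumps for post-mortems
```

Lowering the memtable flush thresholds in storage-conf.xml is the other common lever when raising the heap alone is not enough.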

On Tue, Apr 20, 2010 at 9:32 AM, Schubert Zhang <zs...@gmail.com> wrote:

> Please also post your JVM heap and GC options, i.e. the settings in
> cassandra.in.sh.
> And what about your node hardware?
>
> On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com> wrote:

Re: 0.6.1 insert 1B rows, crashed when using py_stress

Posted by Schubert Zhang <zs...@gmail.com>.
Please also post your JVM heap and GC options, i.e. the settings in
cassandra.in.sh.
And what about your node hardware?
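
For what it's worth, the GCInspector lines already posted show why the options matter: even right after a full CMS collection the heap stays nearly full. A quick check with the figures taken from the log (plain Python, numbers copied verbatim from one "GC for ConcurrentMarkSweep" line):

```python
# Figures from the GCInspector output in the log above:
# "... leaving 1093960776 used; max is 1211826176"
used = 1_093_960_776      # bytes still live after a CMS collection
max_heap = 1_211_826_176  # configured max heap, roughly 1.2 GB

utilization = used / max_heap
print(f"heap utilization after CMS: {utilization:.1%}")  # about 90%
```

A heap that is ~90% occupied after a full GC leaves almost no headroom for memtables, compaction, and hinted handoff at once, which matches the cascade of OutOfMemoryError across unrelated threads.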

On Tue, Apr 20, 2010 at 9:22 AM, Ken Sandney <bl...@gmail.com> wrote:
