Posted to user@cassandra.apache.org by André Cruz <an...@co.sapo.pt> on 2013/02/08 19:46:38 UTC

Healthy JVM GC

Hello.

I've noticed frequent JVM warnings in the logs about the heap being full:

 WARN [ScheduledTasks:1] 2013-02-08 18:14:20,410 GCInspector.java (line 145) Heap is 0.731554347747841 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-02-08 18:14:20,418 StorageService.java (line 2855) Flushing CFS(Keyspace='Disco', ColumnFamily='FilesPerBlock') to relieve memory pressure
 INFO [ScheduledTasks:1] 2013-02-08 18:14:20,418 ColumnFamilyStore.java (line 659) Enqueuing flush of Memtable-FilesPerBlock@1804403938(6275300/63189158 serialized/live bytes, 52227 ops)
 INFO [FlushWriter:4500] 2013-02-08 18:14:20,419 Memtable.java (line 264) Writing Memtable-FilesPerBlock@1804403938(6275300/63189158 serialized/live bytes, 52227 ops)
 INFO [FlushWriter:4500] 2013-02-08 18:14:21,059 Memtable.java (line 305) Completed flushing /servers/storage/cassandra-data/Disco/FilesPerBlock/Disco-FilesPerBlock-he-6154-Data.db (6332375 bytes) for commitlog position ReplayPosition(segmentId=1357730625412, position=10756636)
 WARN [ScheduledTasks:1] 2013-02-08 18:23:31,970 GCInspector.java (line 145) Heap is 0.6835904101057064 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-02-08 18:23:31,971 StorageService.java (line 2855) Flushing CFS(Keyspace='Disco', ColumnFamily='BlocksKnownPerUser') to relieve memory pressure
 INFO [ScheduledTasks:1] 2013-02-08 18:23:31,972 ColumnFamilyStore.java (line 659) Enqueuing flush of Memtable-BlocksKnownPerUser@2072550435(1834642/60143054 serialized/live bytes, 67010 ops)
 INFO [FlushWriter:4501] 2013-02-08 18:23:31,972 Memtable.java (line 264) Writing Memtable-BlocksKnownPerUser@2072550435(1834642/60143054 serialized/live bytes, 67010 ops)
 INFO [FlushWriter:4501] 2013-02-08 18:23:32,827 Memtable.java (line 305) Completed flushing /servers/storage/cassandra-data/Disco/BlocksKnownPerUser/Disco-BlocksKnownPerUser-he-484930-Data.db (7404407 bytes) for commitlog position ReplayPosition(segmentId=1357730625413, position=6093472)
 WARN [ScheduledTasks:1] 2013-02-08 18:29:46,198 GCInspector.java (line 145) Heap is 0.6871977390878024 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-02-08 18:29:46,199 StorageService.java (line 2855) Flushing CFS(Keyspace='Disco', ColumnFamily='FileRevision') to relieve memory pressure
 INFO [ScheduledTasks:1] 2013-02-08 18:29:46,200 ColumnFamilyStore.java (line 659) Enqueuing flush of Memtable-FileRevision@1526026442(7245147/63711465 serialized/live bytes, 23779 ops)
 INFO [FlushWriter:4502] 2013-02-08 18:29:46,201 Memtable.java (line 264) Writing Memtable-FileRevision@1526026442(7245147/63711465 serialized/live bytes, 23779 ops)
 INFO [FlushWriter:4502] 2013-02-08 18:29:46,769 Memtable.java (line 305) Completed flushing /servers/storage/cassandra-data/Disco/FileRevision/Disco-FileRevision-he-5438-Data.db (5480642 bytes) for commitlog position ReplayPosition(segmentId=1357730625413, position=29816878)
 INFO [ScheduledTasks:1] 2013-02-08 18:34:13,442 GCInspector.java (line 122) GC for ConcurrentMarkSweep: 352 ms for 1 collections, 5902597760 used; max is 8357150720
 WARN [ScheduledTasks:1] 2013-02-08 18:34:13,442 GCInspector.java (line 145) Heap is 0.7062930845406603 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-02-08 18:34:13,443 StorageService.java (line 2855) Flushing CFS(Keyspace='Disco', ColumnFamily='NamespaceFile') to relieve memory pressure
 INFO [ScheduledTasks:1] 2013-02-08 18:34:13,443 ColumnFamilyStore.java (line 659) Enqueuing flush of Memtable-NamespaceFile@1719163849(14395839/64692951 serialized/live bytes, 51765 ops)
 INFO [FlushWriter:4503] 2013-02-08 18:34:13,446 Memtable.java (line 264) Writing Memtable-NamespaceFile@1719163849(14395839/64692951 serialized/live bytes, 51765 ops)
 INFO [FlushWriter:4503] 2013-02-08 18:34:14,173 Memtable.java (line 305) Completed flushing /servers/storage/cassandra-data/Disco/NamespaceFile/Disco-NamespaceFile-he-5516-Data.db (7747535 bytes) for commitlog position ReplayPosition(segmentId=1357730625414, position=14940209)
 WARN [ScheduledTasks:1] 2013-02-08 18:37:00,593 GCInspector.java (line 145) Heap is 0.6828223856659127 full.  You may need to reduce memtable and/or cache sizes.  Cassandra will now flush up to the two largest memtables to free up memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-02-08 18:37:00,593 StorageService.java (line 2855) Flushing CFS(Keyspace='Disco', ColumnFamily='RevisionLog') to relieve memory pressure
 INFO [ScheduledTasks:1] 2013-02-08 18:37:00,594 ColumnFamilyStore.java (line 659) Enqueuing flush of Memtable-RevisionLog@726433503(12434137/65978509 serialized/live bytes, 59638 ops)
 INFO [FlushWriter:4504] 2013-02-08 18:37:00,596 Memtable.java (line 264) Writing Memtable-RevisionLog@726433503(12434137/65978509 serialized/live bytes, 59638 ops)
 INFO [FlushWriter:4504] 2013-02-08 18:37:01,523 Memtable.java (line 305) Completed flushing /servers/storage/cassandra-data/Disco/RevisionLog/Disco-RevisionLog-he-1664-Data.db (2366787 bytes) for commitlog position ReplayPosition(segmentId=1357730625414, position=27626400)


Is this healthy GC behaviour, or should I adjust the memory/GC settings? This machine has 32 GB of RAM, and my current JVM settings are:

-XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 
-Xms8049M 
-Xmx8049M 
-Xmn800M 
-XX:+HeapDumpOnOutOfMemoryError 
-Xss196k 
-XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC 
-XX:+CMSParallelRemarkEnabled 
-XX:SurvivorRatio=8 
-XX:MaxTenuringThreshold=1 
-XX:CMSInitiatingOccupancyFraction=75 
-XX:+UseCMSInitiatingOccupancyOnly 
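For reference, the fraction GCInspector reports is just heap-used divided by heap-max; a quick sketch of my own (not Cassandra output) reproducing the 18:34:13 warning from the ConcurrentMarkSweep line above:

```python
# Figures copied verbatim from the 18:34:13 GCInspector line above.
used_bytes = 5902597760   # "5902597760 used"
max_bytes = 8357150720    # "max is 8357150720" (Runtime max, slightly under -Xmx8049M)

occupancy = used_bytes / max_bytes
print(f"post-CMS heap occupancy: {occupancy:.4f}")  # ~0.7063, matching "Heap is 0.70629... full"
```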

Thanks,
André Cruz



Re: Healthy JVM GC

Posted by aaron morton <aa...@thelastpickle.com>.
> -Xms8049M 
> -Xmx8049M 
> -Xmn800M 
That's a healthy amount of memory for the JVM. 

If you are using Row Caches, reduce their size and/or ensure you are using Serializing (off heap) caches.
Also consider changing flush_largest_memtables_at in cassandra.yaml from 0.75 to 0.80 so it differs from the CMS occupancy setting. 
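A minimal sketch of those two yaml changes (setting names as they appear in a 1.1-era cassandra.yaml; check them against your own config):

```yaml
# Trigger emergency memtable flushes above the CMS initiating occupancy
# fraction (0.75), so the two mechanisms don't fire at the same heap level.
flush_largest_memtables_at: 0.80

# Keep row cache entries off the Java heap (serialized native memory).
row_cache_provider: SerializingCacheProvider
```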
If you have a lot of rows, 100's of millions, consider increasing the bloom filter false positive chance (bloom_filter_fp_chance) so the bloom filters use less heap. 
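As a rough illustration of why row count matters here (textbook Bloom filter sizing, not Cassandra-specific code; the row count is a hypothetical example):

```python
import math

# Approximate Bloom filter cost: bits per key = -ln(p) / (ln 2)^2,
# where p is the target false-positive chance (bloom_filter_fp_chance).
def bloom_bits_per_key(p):
    return -math.log(p) / (math.log(2) ** 2)

rows = 300_000_000  # hypothetical: "100's of millions" of rows
for p in (0.01, 0.1):
    mb = rows * bloom_bits_per_key(p) / 8 / 1024 / 1024
    print(f"fp_chance={p}: ~{mb:.0f} MB of heap")
```

Raising p from 0.01 to 0.1 roughly halves the per-key bits, so on hundreds of millions of rows that's a meaningful chunk of heap back.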

Or just upgrade to 1.2, which moves bloom filters off heap and uses less JVM memory. 

Cheers

-----------------
Aaron Morton
Freelance Cassandra Developer
New Zealand

@aaronmorton
http://www.thelastpickle.com
