Posted to user@cassandra.apache.org by 王一锋 <wa...@aspire-tech.com> on 2010/07/20 08:47:46 UTC

What is consuming the heap?

In my cluster, I have set both KeysCached and RowsCached of my column family on all nodes to "0",
but a few nodes still crashed because of OutOfMemory
(from the gc.log, a full GC wasn't able to free up any memory space).

What else can be consuming the heap?

Heap size is 10 GB and the data load per node was around 300 GB; each node has a 16-core CPU and a 1 TB HDD.

2010-07-20 

SV: SV: What is consuming the heap?

Posted by Thorvaldsson Justus <ju...@svenskaspel.se>.
There is some more information about memory usage here:
http://wiki.apache.org/cassandra/StorageConfiguration
/J


Re: SV: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
No, I don't think so, because I'm not using supercolumns and no single column exceeds 1M in size.

2010-07-20

SV: What is consuming the heap?

Posted by Thorvaldsson Justus <ju...@svenskaspel.se>.
A supercolumn/column must fit entirely in node memory.
Could that be it?
/Justus

Re: Re: Re: What is consuming the heap?

Posted by Benjamin Black <b...@b3k.us>.
Have you changed the default Memtable settings?  Are you running on
nodes with a single 1TB drive?  Are you monitoring your I/O load on
the nodes?
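A rough way to reason about the memtable question: the Memtable threshold in storage-conf.xml bounds the serialized size, but the live heap footprint is larger because of JVM object overhead, and more than one memtable per column family can be alive while flushes are queued. A back-of-envelope sketch (all factors below are illustrative assumptions, not measured Cassandra values):

```python
def memtable_heap_estimate_mb(throughput_mb, overhead_factor,
                              column_families, in_flight):
    """Worst-case heap held by memtables (rough estimate).

    throughput_mb   -- serialized Memtable threshold per column family, in MB
    overhead_factor -- JVM object overhead vs. serialized size (assumed)
    column_families -- number of actively written column families
    in_flight       -- memtables per CF alive at once (active + queued flushes)
    """
    return throughput_mb * overhead_factor * column_families * in_flight

# Example: 64 MB threshold, 3x overhead, 4 CFs, 2 memtables in flight
# -> 1536 MB of heap just for memtables.
estimate = memtable_heap_estimate_mb(64, 3, 4, 2)
```

Even with conservative assumptions, a handful of busy column families can account for a significant slice of a 10 GB heap.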


Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
The version we are using is 0.6.1

2010-07-23

Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
Yes, we are doing a lot of inserts.

But how can CASSANDRA-1042 cause an OutOfMemory?
We are using multigetSlice() and are not doing any get_range_slice() calls at all.

2010-07-23

Re: Re: What is consuming the heap?

Posted by Jonathan Ellis <jb...@gmail.com>.
On Tue, Jul 20, 2010 at 11:33 PM, Peter Schuller
<pe...@infidyne.com> wrote:
>>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
>> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
>> java.lang.OutOfMemoryError: Java heap space
>>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584
>
> So that confirms a "legitimate" out-of-memory condition in the sense
> that CMS is reclaiming extremely little and the live set after a
> concurrent mark/sweep is indeed around the 10 gig.

Are you doing a lot of inserts?  You might be hitting
https://issues.apache.org/jira/browse/CASSANDRA-1042

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: Re: What is consuming the heap?

Posted by Peter Schuller <pe...@infidyne.com>.
>  INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
> ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
> java.lang.OutOfMemoryError: Java heap space
>  INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584

So that confirms a "legitimate" out-of-memory condition in the sense
that CMS is reclaiming extremely little and the live set after a
concurrent mark/sweep is indeed around the 10 gig.


-- 
/ Peter Schuller

Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
I can only find these in the system.log

 INFO [GC inspection] 2010-07-21 01:01:49,661 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11748 ms, 413673472 reclaimed leaving 9779542600 used; max is 10873667584
ERROR [Thread-35] 2010-07-21 01:02:10,941 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-35,5,main]
java.lang.OutOfMemoryError: Java heap space
 INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed leaving 10172790816 used; max is 10873667584




2010-07-21 
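Lines like the GCInspector entries above can be checked mechanically. A small sketch that parses the 0.6-era log format shown in this thread and reports how full the heap is after a CMS cycle:

```python
import re

# Matches GCInspector output as it appears in this thread's 0.6 logs.
GC_RE = re.compile(
    r"GC for ConcurrentMarkSweep: (\d+) ms, (\d+) reclaimed "
    r"leaving (\d+) used; max is (\d+)"
)

def gc_stats(line):
    """Return pause time, bytes reclaimed, and live heap fraction, or None."""
    m = GC_RE.search(line)
    if m is None:
        return None
    pause_ms, reclaimed, used, heap_max = map(int, m.groups())
    return {"pause_ms": pause_ms,
            "reclaimed": reclaimed,
            "live_fraction": used / heap_max}

line = ("INFO [GC inspection] 2010-07-21 01:02:10,958 GCInspector.java "
        "(line 110) GC for ConcurrentMarkSweep: 10043 ms, 259576 reclaimed "
        "leaving 10172790816 used; max is 10873667584")
stats = gc_stats(line)
# live_fraction is about 0.94 here: almost the whole heap is live data,
# i.e. a genuine capacity problem rather than GC misbehaviour.
```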








Re: What is consuming the heap?

Posted by Jonathan Ellis <jb...@gmail.com>.
you should post the full stack trace.




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
No, I'm using QUORUM for both writes and reads.
The replication factor is 3.

2010-07-21 
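For reference, the replica count behind QUORUM is simple arithmetic: a QUORUM operation must reach floor(RF / 2) + 1 replicas, so with RF = 3 both reads and writes block on 2 nodes, and any read quorum overlaps any write quorum:

```python
def quorum(replication_factor):
    # A QUORUM read or write must reach floor(RF / 2) + 1 replicas.
    return replication_factor // 2 + 1

# RF = 3 -> 2 replicas; since 2 + 2 > 3, every read quorum shares at
# least one node with every completed write quorum.
rf3 = quorum(3)
```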








Re: Re: What is consuming the heap?

Posted by Dathan Pattishall <da...@gmail.com>.
By any chance, are you using ConsistencyLevel::ZERO on writes?





Re: Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
Yes, I'm running with default settings otherwise.
For KeysCached I've tried '0' (not cached), '1' (fully cached), and a fixed value of 500000; RowsCached was left at its default every time.
So I don't think the problem is the cache.
ConcurrentReads was 32 and ConcurrentWrites was 64; I also tried 320 and 640.

The read/write ratio is about 2:1.

How much memory does a compaction need?
Another 2 nodes went down last night. They were doing a compaction before they went down, judging from the timestamps of the *tmp* files in the data folder.

Stack trace for node 1
 INFO [GC inspection] 2010-07-23 04:13:24,517 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 31275 ms, 29578704 reclaimed leaving 10713006792 used; max is 10873667584
ERROR [MESSAGE-DESERIALIZER-POOL:1] 2010-07-23 04:14:30,656 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.net.MessageSerializer.deserialize(Message.java:138)
        at org.apache.cassandra.net.MessageDeserializationTask.run(MessageDeserializationTask.java:45)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        ... 2 more

Stack trace for node 2
 INFO [COMMIT-LOG-WRITER] 2010-07-23 01:41:06,550 CommitLogSegment.java (line 50) Creating new commitlog segment /opt/crawler/cassandra/sysdata/commitlog/CommitLog-1279820466550.log
 INFO [Timer-1] 2010-07-23 01:41:09,027 Gossiper.java (line 179) InetAddress /183.62.134.31 is now dead.
 INFO [ROW-MUTATION-STAGE:45] 2010-07-23 01:41:09,279 ColumnFamilyStore.java (line 357) source_page has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='/opt/crawler/cassandra/sysdata/commitlog/CommitLog-1279820466550.log', position=9413)
 INFO [ROW-MUTATION-STAGE:45] 2010-07-23 01:41:09,322 ColumnFamilyStore.java (line 609) Enqueuing flush of Memtable(source_page)@1343553539
 INFO [FLUSH-WRITER-POOL:1] 2010-07-23 01:41:09,323 Memtable.java (line 148) Writing Memtable(source_page)@1343553539
 INFO [GMFD:1] 2010-07-23 01:41:09,349 Gossiper.java (line 568) InetAddress /183.62.134.30 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,349 Gossiper.java (line 568) InetAddress /183.62.134.31 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.28 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.26 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.27 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.24 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.25 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.22 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,350 Gossiper.java (line 568) InetAddress /183.62.134.23 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.33 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.32 is now UP
 INFO [GMFD:1] 2010-07-23 01:41:09,351 Gossiper.java (line 568) InetAddress /183.62.134.34 is now UP
 INFO [GC inspection] 2010-07-23 01:41:24,192 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 12908 ms, 413977296 reclaimed leaving 9524655928 used; max is 10873667584
 INFO [Timer-1] 2010-07-23 01:41:50,867 Gossiper.java (line 179) InetAddress /183.62.134.34 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.33 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.32 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,871 Gossiper.java (line 179) InetAddress /183.62.134.31 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.30 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.28 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.27 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,872 Gossiper.java (line 179) InetAddress /183.62.134.26 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.25 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.24 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.23 is now dead.
 INFO [Timer-1] 2010-07-23 01:41:50,873 Gossiper.java (line 179) InetAddress /183.62.134.22 is now dead.
 INFO [GC inspection] 2010-07-23 01:41:50,875 GCInspector.java (line 110) GC for ConcurrentMarkSweep: 11964 ms, 226808 reclaimed leaving 10303521344 used; max is 10873667584
ERROR [Thread-21] 2010-07-23 01:41:50,890 CassandraDaemon.java (line 78) Fatal exception in thread Thread[Thread-21,5,main]
java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:71)



2010-07-23

Re: Re: What is consuming the heap?

Posted by Peter Schuller <pe...@infidyne.com>.
> So the bloom filters reside in memory completely?

Yes. The point of bloom filters in cassandra is to act as a fast way
to determine whether sstables need to be consulted. This check
involves random access into the bloom filter. It needs to be in memory
for this to be effective.

But due to the nature of bloom filters you don't need a lot of memory
per key in the database, so it scales pretty well.

> I count the total size of *-Filter.db files in my keyspace, it's
> 436,747,815bytes.
>
> I guess this means it won't consume a major part of 10g heap space

Right, doesn't sound like bloom filters are the cause.

Are you running with defaults settings otherwise - cache sizes, flush
thresholds, etc?

-- 
/ Peter Schuller
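The "not a lot of memory per key" point can be made concrete with the textbook bloom filter sizing formula; the 1% false-positive rate and 300-million-key count below are illustrative assumptions, not Cassandra's actual tuning:

```python
import math

def bloom_bits_per_key(false_positive_rate):
    # Optimal bloom filter sizing: m/n = -ln(p) / (ln 2)^2 bits per element.
    return -math.log(false_positive_rate) / math.log(2) ** 2

bits = bloom_bits_per_key(0.01)            # ~9.6 bits per key at p = 1%
heap_mib = bits * 300_000_000 / 8 / 2**20  # 300M keys -> roughly 340 MiB
```

So even hundreds of millions of keys cost well under a gigabyte of filter space, consistent with the ~437 MB of *-Filter.db files reported in this thread.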

Re: Re: What is consuming the heap?

Posted by 王一锋 <wa...@aspire-tech.com>.
So the bloom filters reside in memory completely?

We do have a lot of small values, hundreds of millions of columns in a columnfamily.

I counted the total size of the *-Filter.db files in my keyspace; it's 436,747,815 bytes.

I guess this means they won't consume a major part of the 10 GB heap.


2010-07-21 
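The filter-file tally above can be reproduced with a few lines; the keyspace path in the comment is a hypothetical example:

```python
from pathlib import Path

def filter_file_bytes(keyspace_dir):
    """Sum the on-disk sizes of the SSTable bloom filter files (*-Filter.db)."""
    return sum(f.stat().st_size
               for f in Path(keyspace_dir).glob("*-Filter.db"))

# e.g. filter_file_bytes("/var/lib/cassandra/data/MyKeyspace")
```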








Re: What is consuming the heap?

Posted by Peter Schuller <pe...@infidyne.com>.
> heap size is 10G and the load of data per node was around 300G, 16-core CPU,

Are the 300 GB made up of *really* small values? Per-SSTable bloom
filters do consume memory, but you'd have to have a *lot* of *really*
small values for a 300 GB database to cause bloom filters to be a
significant part of a 10 GB heap.

-- 
/ Peter Schuller