Posted to user@hbase.apache.org by stack <st...@duboce.net> on 2010/01/11 21:11:29 UTC

Cache only uses 2G?

Here is a question from Adam Silberstein, who is having trouble writing to
this list:

"I’m running on a particular cluster of machines and seeing worse than
expected performance.  I’ve run on a different, but identical, cluster of
machines and gotten about 2-4x better throughput and latency.  I’ve checked
all of the configuration settings and they are all the same.  The only thing
I observe that looks odd is the memory usage by each region server.  I’ve
allocated 6 GB heap space to each.  There is a total of 20GB of records on
each.  When I use “top” to observe memory usage, I see that the region
server process does have 6GB of virtual memory.  When I start running a
workload, I can see the amount of resident memory climbing.  However, it
always stops at 2GB.  Since I have 20 GB on each region server, I’d expect
the process to use all the memory allocated to it.

"Is there a reasonable explanation for why it stops at 2GB, and/or has
anyone seen this happen"

I took a look at the cache code and I see that the constructor takes the
maximum size as an int, though internally it uses a long for the maximum
size.  I wonder if this is the issue?  I've filed an issue: HBASE-2106.
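
As a minimal sketch of why an int parameter would cap things near 2GB (this
is illustrative only, not the actual cache code, and the class names are made
up): an int tops out at Integer.MAX_VALUE = 2,147,483,647 bytes, just under
2GB, so narrowing a larger long into an int constructor argument wraps the
configured size:

  // Illustrative only -- not the HBase cache code.  Shows how an int-typed
  // max-size constructor argument caps the effective cache size at ~2GB.
  public class IntMaxSizeSketch {

    static class IntSizedCache {
      private final long maxSize;

      // The suspect pattern: the constructor takes an int, so values above
      // Integer.MAX_VALUE (2,147,483,647 bytes, ~2GB) cannot be expressed.
      IntSizedCache(int maxSize) {
        this.maxSize = maxSize;  // widened to long only after the damage is done
      }

      long getMaxSize() {
        return maxSize;
      }
    }

    public static void main(String[] args) {
      long desired = 6L * 1024 * 1024 * 1024;  // e.g. a 6GB cache
      // Narrowing the long to an int wraps: (int) 6442450944L == -2147483648.
      IntSizedCache cache = new IntSizedCache((int) desired);
      System.out.println("requested: " + desired);             // 6442450944
      System.out.println("effective: " + cache.getMaxSize());  // -2147483648
    }
  }

Even without an explicit cast, anything that computes the size as a fraction
of heap and hands it to an int parameter hits the same ceiling.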

St.Ack