Posted to dev@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2010/01/11 21:11:54 UTC

[jira] Created: (HBASE-2106) Cache uses only 2G of heap no matter how much allocated?

Cache uses only 2G of heap no matter how much allocated?
--------------------------------------------------------

                 Key: HBASE-2106
                 URL: https://issues.apache.org/jira/browse/HBASE-2106
             Project: Hadoop HBase
          Issue Type: Bug
            Reporter: stack


Here is a question from Adam Silberstein, who wrote to this list about trouble he is having:

"I'm running on a particular cluster of machines and seeing worse than expected performance.  I've run on a different, but identical, cluster of machines and gotten about 2-4x better throughput and latency.  I've checked all of the configuration settings and they are all the same.  The only thing I observe that looks odd is the memory usage by each region server.  I've allocated 6 GB heap space to each.  There is a total of 20GB of records on each.  When I use "top" to observe memory usage, I see that the region server process does have 6GB of virtual memory.  When I start running a workload, I can see the amount of resident memory climbing.  However, it always stops at 2GB.  Since I have 20 GB on each region server, I'd expect the process to use all the memory allocated it

"Is there a reasonable explanation for why it stops at 2GB, and/or has anyone seen this happen"

I took a look at the cache code and saw that the constructor takes an int for the maximum size, though internally it uses a long. An int caps the value at Integer.MAX_VALUE, 2,147,483,647 bytes, which is just under 2GB and matches the plateau described above. I wonder if this is the issue?
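For illustration, here is a minimal, self-contained sketch of how an int-typed constructor parameter silently caps a long-based limit at Integer.MAX_VALUE. The class and method names here are hypothetical, not the actual HBase cache code; this only shows the shape of the suspected bug:

    // Hypothetical sketch of the suspected bug: the cache tracks its limit
    // as a long, but the constructor accepts an int, so no caller can
    // request more than Integer.MAX_VALUE (~2GB) no matter the heap size.
    public class CacheSizeSketch {

        private final long maxSize; // internal accounting uses a long

        // Suspected bug shape: an int parameter caps the requestable size
        // at Integer.MAX_VALUE = 2,147,483,647 bytes, just under 2GB.
        public CacheSizeSketch(int maxSize) {
            this.maxSize = maxSize; // widened to long, but already capped
        }

        public long getMaxSize() {
            return maxSize;
        }

        public static void main(String[] args) {
            long requested = 6L * 1024 * 1024 * 1024; // 6GB = 6,442,450,944 bytes

            // A caller holding a long must narrow it to fit the int parameter.
            // The cast keeps only the low 32 bits: here that is 2^31, which a
            // signed int reads as Integer.MIN_VALUE, i.e. a negative size.
            System.out.println((int) requested);      // -2147483648

            // Even a careful caller that clamps to the int range can pass at
            // most ~2GB, matching the observed resident-memory plateau.
            int clamped = (int) Math.min(requested, Integer.MAX_VALUE);
            System.out.println(new CacheSizeSketch(clamped).getMaxSize()); // 2147483647
        }
    }

If this is the culprit, widening the constructor parameter to long would let the full configured size reach the cache's internal accounting intact.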

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.