Posted to issues@hbase.apache.org by "Todd Lipcon (JIRA)" <ji...@apache.org> on 2010/06/21 10:34:23 UTC

[jira] Created: (HBASE-2761) GC overhead limit exceeded in client

GC overhead limit exceeded in client
------------------------------------

                 Key: HBASE-2761
                 URL: https://issues.apache.org/jira/browse/HBASE-2761
             Project: HBase
          Issue Type: Bug
          Components: client
    Affects Versions: 0.21.0
            Reporter: Todd Lipcon
            Priority: Blocker
             Fix For: 0.21.0


Never seen this prior to the new meta prefetch stuff. Saw it tonight on a YCSB run after about an hour.

Exception in thread "Thread-9" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Hashtable.rehash(Hashtable.java:356)
        at java.util.Hashtable.put(Hashtable.java:412)
        at java.util.Properties.setProperty(Properties.java:143)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1337)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1227)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1156)
        at org.apache.hadoop.conf.Configuration.iterator(Configuration.java:1198)
        at org.apache.hadoop.hbase.HBaseConfiguration.hashCode(HBaseConfiguration.java:112)
        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:121)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:130)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:99)
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:102)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.prefetchRegionCache(HConnectionManager.java:733)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:784)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:678)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfPuts(HConnectionManager.java:1424)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:660)
        at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:545)




[jira] Commented: (HBASE-2761) GC overhead limit exceeded in client

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880890#action_12880890 ] 

Todd Lipcon commented on HBASE-2761:
------------------------------------

Didn't have heap dumps enabled for the JVM. Was running with the default heap (what's that, 128MB?), but I could run the same program just fine on trunk last week. Note that it's a "GC overhead limit" error, and not a plain "out of memory".
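
For reference, the standard HotSpot flags to capture a dump on the next failure would be along these lines (the dump path and the rest of the command line are placeholders):

    java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/hbase-client.hprof ... <client main class>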

Note from the stack trace that we're initializing a new HTable in every prefetch without passing in a configuration. Also, I think we must be prefetching too often, because this was a long-running client and should have had everything in cache. This is probably hurting performance and also generating a lot of garbage.
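
To make the garbage pattern concrete, here is a minimal standalone sketch (not HBase code; the class name is made up) of what the trace implies: each fresh Hadoop Configuration parses its XML resources into a new Properties (a Hashtable) the first time it is touched, and hashing it the way HBaseConfiguration.hashCode does means iterating every entry, so doing this once per prefetch re-allocates the whole property set over and over:

    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    // Standalone approximation of the allocation pattern in the stack trace above.
    public class ConfigChurnSketch {
      public static void main(String[] args) {
        for (int i = 0; i < 100000; i++) {
          Configuration conf = new Configuration();   // fresh config; XML resources not parsed yet
          int hash = 0;
          // Iterating forces getProps()/loadResources(), building a brand-new Properties table,
          // roughly what HBaseConfiguration.hashCode() triggers on every getConnection().
          for (Map.Entry<String, String> e : conf) {
            hash ^= e.getKey().hashCode() ^ e.getValue().hashCode();
          }
        }
      }
    }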



[jira] Resolved: (HBASE-2761) GC overhead limit exceeded in client

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HBASE-2761.
--------------------------------

    Resolution: Invalid

Spent some time looking at heap dumps in MAT (the Eclipse Memory Analyzer); this is almost definitely HBASE-2763.



[jira] Commented: (HBASE-2761) GC overhead limit exceeded in client

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880895#action_12880895 ] 

Todd Lipcon commented on HBASE-2761:
------------------------------------

Yep, JD pointed me to HDFS-2756, which has the metascan bug where it makes a new configuration. My bet is that all the configuration loading/hashing was spewing garbage like crazy.

I'll also work on some more tests for prefetch; I do think there are some bugs lurking.
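
The fix direction, sketched below (class and helper are made up, constructor shapes are approximate, and this is not the actual patch), is just to thread the caller's existing Configuration through to the .META. HTable instead of letting it build a fresh one:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.client.HTable;

    // Sketch only: reuse the caller's Configuration so the meta HTable shares the cached
    // connection instead of constructing a fresh HBaseConfiguration on every prefetch.
    final class MetaScanSketch {
      static HTable openMetaTable(Configuration callerConf) throws IOException {
        return new HTable(callerConf, HConstants.META_TABLE_NAME);
        // ...rather than: new HTable(HConstants.META_TABLE_NAME), which builds a new config per call.
      }
    }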



[jira] Commented: (HBASE-2761) GC overhead limit exceeded in client

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880888#action_12880888 ] 

Jonathan Gray commented on HBASE-2761:
--------------------------------------

Did you happen to look at the heap dump? How much memory were you running the client with, and how many regions are in the table?



[jira] Commented: (HBASE-2761) GC overhead limit exceeded in client

Posted by "Todd Lipcon (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880896#action_12880896 ] 

Todd Lipcon commented on HBASE-2761:
------------------------------------

er.. that should be HBASE-2756...



[jira] Commented: (HBASE-2761) GC overhead limit exceeded in client

Posted by "Jonathan Gray (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880893#action_12880893 ] 

Jonathan Gray commented on HBASE-2761:
--------------------------------------

Right, but the GC overhead limit is basically an out-of-memory condition: it means the JVM spent too much time collecting while reclaiming very little (by default, roughly 98% of time in GC recovering less than 2% of the heap). There could be some CPU starvation, I suppose, but in the past I've seen similar situations trigger either one of those messages.

The stack trace is odd. It does seem to be building a new Configuration each time, and each Configuration allocates a new hash table. Are the current prefetch tests sufficient? In any case, I guess there's a bug in that we're not reusing the existing Configuration.
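
A simplified sketch of the connection lookup (this is not the real HConnectionManager; the class and types are made up) shows why the hashing shows up in the trace at all: the cache is keyed by a content hash of the Configuration, so every lookup walks all of its properties, and on a fresh config that walk also parses the XML resources first:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    // Simplified stand-in for a connection cache keyed by configuration contents.
    final class ConnectionCacheSketch {
      interface Connection {}                        // placeholder for the real connection type

      private final Map<Integer, Connection> cache = new HashMap<Integer, Connection>();

      Connection getConnection(Configuration conf) {
        int key = 0;
        for (Map.Entry<String, String> e : conf) {   // iterates every key/value just to compute the lookup key
          key ^= e.getKey().hashCode() ^ e.getValue().hashCode();
        }
        Connection conn = cache.get(key);
        if (conn == null) {
          conn = new Connection() {};                // placeholder for the real (expensive) connection setup
          cache.put(key, conn);
        }
        return conn;
      }
    }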
