Posted to issues@ambari.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2016/07/26 13:15:20 UTC

[jira] [Commented] (AMBARI-17893) HBase OOM while booting

    [ https://issues.apache.org/jira/browse/AMBARI-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393767#comment-15393767 ] 

Hudson commented on AMBARI-17893:
---------------------------------

FAILURE: Integrated in Ambari-trunk-Commit #5388 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5388/])
AMBARI-17893. HBase OOM while booting.(vbrodetskyi) (vbrodetskyi: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=bcef91c8d985c134c04a923491d7ff3e2c6f915f])
* ambari-server/src/main/resources/stacks/HDP/2.2/services/HBASE/configuration/hbase-env.xml
* ambari-server/src/main/resources/stacks/HDP/2.3/services/HBASE/configuration/hbase-env.xml


> HBase OOM while booting
> -----------------------
>
>                 Key: AMBARI-17893
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17893
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17893.patch
>
>
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
> at org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:47)
> at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
> at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:217)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:231)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:561)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:410)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2507)
> This issue has been resolved by adding the config entry below to hbase-env:
> export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS
> {% if hbase_max_direct_memory_size %}
> -XX:MaxDirectMemorySize={{hbase_max_direct_memory_size}}m
> {% endif %}
> "



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)