Posted to user@hbase.apache.org by Rui Xing <xi...@gmail.com> on 2008/10/16 14:18:20 UTC

out of memory error

Hello List,

We encountered an out-of-memory error during data loading. We have 5 data nodes
and 1 name node distributed across 6 machines, and block-level compression was used.
The log output is below; it seems the problem was caused by compression. Has
anybody experienced such an error? Any help or clues are appreciated.

2008-10-15 21:44:33,069 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.compactor] regionserver.HRegionServer$1(579): Set stop flag in regionserver/0:0:0:0:0:0:0:0:60020.compactor
java.lang.OutOfMemoryError
        at sun.misc.Unsafe.allocateMemory(Native Method)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:108)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:115)
        at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
        at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
        at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
        at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1543)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1442)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1431)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1426)
        at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
        at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.<init>(HStoreFile.java:635)
        at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.<init>(HStoreFile.java:717)
        at org.apache.hadoop.hbase.regionserver.HStoreFile$HalfMapFileReader.<init>(HStoreFile.java:915)
        at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:408)
        at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:263)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1698)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:481)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:421)
        at org.apache.hadoop.hbase.regionserver.HRegion.splitRegion(HRegion.java:815)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.split(CompactSplitThread.java:133)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:86)
2008-10-15 21:44:33,661 FATAL [regionserver/0:0:0:0:0:0:0:0:60020.cacheFlusher] regionserver.Flusher(183): Replay of hlog required. Forcing server restart
org.apache.hadoop.hbase.DroppedSnapshotException: region: p4p_test,,1224072139042
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1087)
        at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:985)
        at org.apache.hadoop.hbase.regionserver.Flusher.flushRegion(Flusher.java:174)
        at org.apache.hadoop.hbase.regionserver.Flusher.run(Flusher.java:91)
Caused by: java.lang.OutOfMemoryError
        at sun.misc.Unsafe.allocateMemory(Native Method)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:107)
        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.<init>(ZlibDecompressor.java:115)
        at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibDecompressor(ZlibFactory.java:104)
        at org.apache.hadoop.io.compress.DefaultCodec.createDecompressor(DefaultCodec.java:80)
        at org.apache.hadoop.io.SequenceFile$Reader.getPooledOrNewDecompressor(SequenceFile.java:1458)
        at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1555)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1442)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1431)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1426)
        at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:292)
        at org.apache.hadoop.hbase.regionserver.HStoreFile$HbaseMapFile$HbaseReader.<init>(HStoreFile.java:635)
        at org.apache.hadoop.hbase.regionserver.HStoreFile$BloomFilterMapFile$Reader.<init>(HStoreFile.java:717)
        at org.apache.hadoop.hbase.regionserver.HStoreFile.getReader(HStoreFile.java:413)
        at org.apache.hadoop.hbase.regionserver.HStore.updateReaders(HStore.java:665)
        at org.apache.hadoop.hbase.regionserver.HStore.internalFlushCache(HStore.java:640)
        at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:577)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1074)
        ... 3 more
2008-10-15 21:44:33,661 INFO  [regionserver/0:0:0:0:0:0:0:0:60020.cacheFlusher] regionserver.Flusher(109): regionserver/0:0:0:0:0:0:0:0:60020.cacheFlusher exiting
2008-10-15 21:44:33,665 DEBUG [regionserver/0:0:0:0:0:0:0:0:60020.logRoller] regionserver.HLog(236): Closing current log writer /hbase/log_172.19.139.3_1224070931944_60020/hlog.dat.1224078265898
2008-10-15 21:44:33,667 INFO  [regionserver/0:0:0:0:0:0:0:0:60020.logRoller] regionserver.HLog(249): New log writer created at /hbase/log_172.19.139.3_1224070931944_60020/hlog.dat.1224078273665
2008-10-15 21:44:33,667 INFO  [regionserver/0:0:0:0:0:0:0:0:60020.logRoller] regionserver.LogRoller(93): LogRoller exiting.
2008-10-15 21:44:34,910 DEBUG [regionserver/0:0:0:0:0:0:0:0:60020] hbase.RegionHistorian(316): Offlined
2008-10-15 21:44:34,911 INFO  [regionserver/0:0:0:0:0:0:0:0:60020] regionserver.HRegionServer(432): Stopping infoServer
2008-10-15 21:44:34,911 INFO  [Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=60030]] util.ThreadedServer$Acceptor(656): Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=60030]
2008-10-15 21:44:34,914 INFO  [regionserver/0:0:0:0:0:0:0:0:60020] http.SocketListener(212): Stopped SocketListener on 0.0.0.0:60030
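
For what it's worth, the trace shows each ZlibDecompressor constructor allocating a direct ByteBuffer, and direct buffers are drawn from a separate pool capped by -XX:MaxDirectMemorySize rather than from the Java heap. So the failure at ByteBuffer.allocateDirect looks like that pool being exhausted by many decompressor instances even while the heap still has room. Below is a minimal sketch of the effect, not our actual setup; the 64 KB buffer size and the class name are just illustrative assumptions:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectBufferDemo {
        public static void main(String[] args) {
            // Keep references so the buffers cannot be garbage collected,
            // mimicking many live decompressor instances each holding a direct buffer.
            List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
            int bufferSize = 64 * 1024; // illustrative per-decompressor buffer size
            try {
                while (true) {
                    buffers.add(ByteBuffer.allocateDirect(bufferSize));
                }
            } catch (OutOfMemoryError e) {
                // Thrown when the direct-memory pool is exhausted, not the Java heap.
                System.out.println("Direct memory exhausted after "
                        + buffers.size() + " buffers of " + bufferSize + " bytes");
            }
        }
    }

Running it with a deliberately small cap, e.g. java -XX:MaxDirectMemorySize=16m DirectBufferDemo, shows the OutOfMemoryError appearing well before any heap limit is reached.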

Thanks,
-Ray

Re: out of memory error

Posted by Rong-en Fan <gr...@gmail.com>.
On Thu, Oct 16, 2008 at 8:18 PM, Rui Xing <xi...@gmail.com> wrote:
> Hello List,
>
> We encountered an out-of-memory error during data loading. We have 5 data nodes
> and 1 name node distributed across 6 machines, and block-level compression was used.
> The log output is below; it seems the problem was caused by compression. Has
> anybody experienced such an error? Any help or clues are appreciated.

Hmm... which HBase and Hadoop versions are you using?

Regards,
Rong-En Fan