Posted to user@hbase.apache.org by 陈加俊 <cj...@gmail.com> on 2011/04/12 02:35:12 UTC

too many regions cause OME ?

Is this too many regions? Is the memory enough?
HBase-0.20.6

2011-04-12 00:16:31,844 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: OutOfMemoryError, aborting.
java.lang.OutOfMemoryError: Java heap space
        at java.io.BufferedInputStream.<init>(BufferedInputStream.java:178)
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1380)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1648)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at java.io.DataInputStream.readFully(DataInputStream.java:152)
        at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
        at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
        at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
        at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1636)
        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:321)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1566)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1533)
        at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1453)
        at java.lang.Thread.run(Thread.java:619)
2011-04-12 00:16:31,847 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
request=0.0, regions=1624, stores=3075, storefiles=2526,
storefileIndexSize=374, memstoreSize=339, compactionQueueSize=0,
usedHeap=3990, maxHeap=3991, blockCacheSize=706107016,
blockCacheFree=131010552, blockCacheCount=10631, blockCacheHitRatio=58,
fsReadLatency=0, fsWriteLatency=0, fsSyncLatency=0
2011-04-12 00:16:31,848 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: worker thread exiting


-- 
Thanks & Best regards
jiajun

Re: too many regions cause OME ?

Posted by Stack <st...@duboce.net>.
Does the RegionServer OOME opening the same file each time it happens?
If so, something is up with that file. Move it aside to get your
cluster online, and then let's try to figure out what's in this file
that brings on the OOME -- a very large record or perhaps a corruption?

St.Ack
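
For the "move it aside" step, here is a minimal sketch using the Hadoop
FileSystem API. It is not from the thread: every path in it is a hypothetical
placeholder, so substitute the real table/region/family/file names, and park
the file somewhere outside the HBase root rather than deleting it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: moves a suspect HFile out of HBase's directory tree so the
// region can open without it. All paths below are hypothetical placeholders.
public class MoveSuspectHFileAside {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();  // picks up core-site.xml/hdfs-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);

    Path suspect = new Path("/hbase/mytable/1234567890/myfamily/3141592653589793238");
    Path parking = new Path("/hbase-quarantine/3141592653589793238");

    fs.mkdirs(parking.getParent());            // make sure the parking directory exists
    if (!fs.rename(suspect, parking)) {        // a plain HDFS rename, nothing HBase-specific
      System.err.println("rename failed for " + suspect);
    }
  }
}

The same move can also be done from the command line with hadoop fs -mv.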


Re: too many regions cause OME ?

Posted by 陈加俊 <cj...@gmail.com>.
I want to know why openRegion can cause the heap OOME, and how to
calculate the required heap size.
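
A rough way to answer that from the metrics dump in the first message is
sketched below. This is a back-of-envelope of my own, not from the thread, and
it assumes the 0.20 metrics report storefileIndexSize, memstoreSize, usedHeap
and maxHeap in MB and blockCacheSize/blockCacheFree in bytes.

// Back-of-envelope heap budget built from the FATAL dump; illustrative only.
public class HeapBudgetSketch {
  public static void main(String[] args) {
    double maxHeapMb        = 3991;
    double usedHeapMb       = 3990;
    double blockCacheMb     = 706107016 / 1024.0 / 1024.0;  // blocks currently cached, ~673 MB
    double blockCacheFreeMb = 131010552 / 1024.0 / 1024.0;  // cache capacity not yet filled, ~125 MB
    double memstoreMb       = 339;
    double storefileIndexMb = 374;  // resident for as long as the regions stay open

    double accounted = blockCacheMb + blockCacheFreeMb + memstoreMb + storefileIndexMb;
    System.out.printf("cache + memstore + index: ~%.0f MB of %.0f MB max heap%n",
        accounted, maxHeapMb);
    System.out.printf("left for everything else (objects for 1624 regions and 2526 store files, "
        + "RPC buffers, GC headroom): ~%.0f MB, with %.0f MB already in use%n",
        maxHeapMb - accounted, usedHeapMb);
  }
}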




-- 
Thanks & Best regards
jiajun

Re: too many regions cause OME ?

Posted by 陈加俊 <cj...@gmail.com>.
Yes, I scan (or get or put) rows all the time.




-- 
Thanks & Best regards
jiajun

Re: too many regions cause OME ?

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Were they opening the same region by any chance?


Re: too many regions cause OME ?

Posted by 陈加俊 <cj...@gmail.com>.
There was no big scan, just normal load. What is also strange is that when one
RS exited, another RS exited, and the other RSs went down the same way, one after another.




-- 
Thanks & Best regards
jiajun

Re: too many regions cause OME ?

Posted by Jean-Daniel Cryans <jd...@apache.org>.
OK, that looks "fine". Did the region server die under heavy load by
any chance? Or was it big scans? Or just normal load?

J-D


Re: too many regions cause OME ?

Posted by 陈加俊 <cj...@gmail.com>.
My configuration is as follows:
 <property>
                <name>hbase.client.write.buffer</name>
                <value>2097152</value>
                <description>
                        1024*1024*2=2097152
                </description>
        </property>

        <property>
                <name>hbase.hstore.blockingStoreFiles</name>
                <value>14</value>
        </property>

        <property>
                <name>hbase.hregion.memstore.block.multiplier</name>
                <value>2</value>
        </property>

        <property>
                <name>hbase.regionserver.handler.count</name>
                <value>30</value>
        </property>

 <property>
                <name>zookeeper.session.timeout</name>
                <value>90000</value>
        </property>

        <property>
                 <name>hbase.zookeeper.property.tickTime</name>
                 <value>9000</value>
        </property>

        <property>
                 <name>hbase.hregion.max.filesize</name>
                 <value>1073741824</value>
                <description>
                Maximum HStoreFile size. If any one of a column families'
HStoreFiles has grown to exceed this value,
                the hosting HRegion is split in two. Default: 256M.
                1024*1024*1024=1073741824 (1G)
                </description>
        </property>
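
None of the properties above bound how much of the heap the block cache and
the memstores may take; in 0.20.x that is governed by hfile.block.cache.size
and hbase.regionserver.global.memstore.upperLimit, which this hbase-site.xml
leaves at their defaults. The sketch below shows the ceilings those defaults
imply for the 3991 MB heap in the dump; the fractions are the defaults as I
recall them, so treat them as assumptions.

// Sketch of the heap ceilings implied by the assumed 0.20 defaults for the two
// settings not present in the configuration above; numbers are illustrative.
public class ConfiguredHeapCeilings {
  public static void main(String[] args) {
    double maxHeapMb = 3991;               // maxHeap from the metrics dump
    double blockCacheFraction = 0.2;       // hfile.block.cache.size (assumed default)
    double memstoreUpperFraction = 0.4;    // hbase.regionserver.global.memstore.upperLimit (assumed default)

    System.out.printf("block cache may grow to ~%.0f MB%n", maxHeapMb * blockCacheFraction);
    System.out.printf("memstores may grow to   ~%.0f MB%n", maxHeapMb * memstoreUpperFraction);
    // Roughly 60% of the heap is spoken for before store file indexes,
    // per-region bookkeeping and RPC buffers are counted.
  }
}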





-- 
Thanks & Best regards
jiajun

Re: too many regions cause OME ?

Posted by Jean-Daniel Cryans <jd...@apache.org>.
And where will they go? The issue isn't the number of regions per se,
it's the amount of data being served by that region server. Also, I
still don't know whether that's really your issue or a configuration
issue (which I have yet to see).

J-D


Re: too many regions cause OME ?

Posted by 陈加俊 <cj...@gmail.com>.
Can I limit the number of regions on one RegionServer?




-- 
Thanks & Best regards
jiajun

Re: too many regions cause OME ?

Posted by Jean-Daniel Cryans <jd...@apache.org>.
It's really a lot, yes, but it could also be a weird configuration or
values that are too big.

J-D


Re: too many regions cause OME ?

Posted by 陈加俊 <cj...@gmail.com>.
There is one table that has 1.4 TB * 3 (replication) of data.
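
A back-of-envelope view of what carrying that much data costs this one region
server in fixed heap, using only the counts from the metrics dump (my own
arithmetic, not from the thread):

// Fixed heap overhead that grows with the number of regions and store files on
// this server; all inputs come from the metrics dump, the math is a sketch.
public class PerRegionOverheadSketch {
  public static void main(String[] args) {
    int regions = 1624;
    int storefiles = 2526;
    double storefileIndexMb = 374;  // storefileIndexSize, assumed to be reported in MB

    System.out.printf("index held per store file: ~%.0f KB%n", storefileIndexMb * 1024 / storefiles);
    System.out.printf("index held per region:     ~%.0f KB%n", storefileIndexMb * 1024 / regions);
    // Those indexes stay on the heap while the regions are open, so packing a
    // 1.4 TB table into ever more regions on a 4 GB heap steadily squeezes the
    // room left for the block cache and the memstores.
  }
}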




-- 
Thanks & Best regards
jiajun

RE: too many regions cause OME ?

Posted by Doug Meil <do...@explorysmedical.com>.
Re: "maxHeap=3991"

Seems like an awful lot of data to put in a 4 GB heap.
