Posted to user@hbase.apache.org by 苏铖 <su...@lietou.com> on 2012/10/30 09:00:29 UTC

Re: Re: How to adjust hbase settings when too many store files?

My code looks like this:
-----------------------------------------
final HBaseConnection hbaseConn =
HBaseConnectionFactory.getHBaseConnection();
final HTable htable = hbaseConn.getHTable(table.getDestName());
// htable.put
htable.close();
hbaseConn.close();
-----------------------------------------

HBaseConnection is only a wrapper class around HBaseAdmin.
-----------------------------------------
public final class HBaseConnection {

    private final HBaseAdmin admin;

    HBaseConnection(HBaseAdmin admin) {
        this.admin = admin;
    }

    public void close() throws HBaseConnectionException {
        try {
            admin.close();
        } catch (IOException e) {
            throw new HBaseConnectionException(e);
        }
    }

    public HTable getHTable(String tableName) throws HBaseConnectionException {
        try {
            return new HTable(admin.getConfiguration(), tableName);
        } catch (IOException e) {
            throw new HBaseConnectionException(e);
        }
    }
}
---------------------------------------
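
Note: in the calling code above, if a put throws, htable.close() and
hbaseConn.close() are never reached. A safer variant of the same put path
with try/finally (just a sketch; same wrapper API as above, nothing new
assumed):
-----------------------------------------
final HBaseConnection hbaseConn =
        HBaseConnectionFactory.getHBaseConnection();
try {
    final HTable htable = hbaseConn.getHTable(table.getDestName());
    try {
        // htable.put, as before
    } finally {
        htable.close(); // also flushes any puts still in the write buffer
    }
} finally {
    hbaseConn.close();
}
-----------------------------------------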

I changed the HBase settings listed below this morning, and the program has
been running well so far.
I will report the result when the program finishes.

Thank you all.

<!-- increase lease time and rpc timeout to avoid scan exceptions -->
  <property>
      <name>hbase.regionserver.lease.period</name>
      <value>90000</value>
      <description>HRegion server lease period in milliseconds. Default is
      60 seconds. Clients must report in within this period else they are
      considered dead.</description>
  </property>
  <property>
      <name>hbase.rpc.timeout</name>
      <value>120000</value>
  </property>

<!-- increase client write buffer and regionserver handler count -->
  <property>
      <name>hbase.client.write.buffer</name>
      <value>4194304</value>
  </property>
  <property>
      <name>hbase.regionserver.handler.count</name>
      <value>20</value>
  </property>

<!-- increase compaction threshold and blocking store files -->
  <property>
      <name>hbase.hstore.compactionThreshold</name>
      <value>7</value>
  </property>
  <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>13</value>
  </property>
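
The write buffer can also be set per HTable on the client, instead of (or in
addition to) hbase.client.write.buffer. A minimal sketch against the HTable
API of this HBase generation (conf and the loop body are placeholders):
-----------------------------------------
HTable htable = new HTable(conf, "statistic_visit_detail");
// buffer puts client-side and send them in batches, not one RPC per row
htable.setAutoFlush(false);
htable.setWriteBufferSize(4 * 1024 * 1024); // matches hbase.client.write.buffer
// ... htable.put(...) in a loop ...
htable.flushCommits(); // push whatever is still buffered
htable.close();
-----------------------------------------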

-----Original Message-----
From: ramkrishna vasudevan [mailto:ramkrishna.s.vasudevan@gmail.com]
Sent: October 30, 2012 14:46
To: user@hbase.apache.org
Subject: Re: Re: How to adjust hbase settings when too many store files?

Hi

Can you check whether your HTable instances are shared across different
threads? That could be the reason for your NullPointerException.
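
HTable is not thread safe, so each worker thread should use its own instance.
A minimal sketch of one way to arrange that (the ThreadLocal and the table
name are only illustrations, not taken from your code):

// one HTable per thread; HTable must not be shared between threads
private static final ThreadLocal<HTable> TABLE = new ThreadLocal<HTable>() {
    @Override
    protected HTable initialValue() {
        try {
            // placeholder configuration and table name
            return new HTable(HBaseConfiguration.create(), "statistic_visit_detail1");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
};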

Regards
Ram

On Tue, Oct 30, 2012 at 7:22 AM, xkwang bruce <br...@gmail.com> wrote:

> Hi, 苏铖.
>
> You may need to pre-split your HTable when the load is heavy, or there may
> be some problem in your client code.
> Just a suggestion.
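>
> A minimal pre-split sketch (the column family and the split keys below are
> made-up examples; derive real split points from your rowkey distribution):
>
> HBaseAdmin admin = new HBaseAdmin(conf);
> HTableDescriptor desc = new HTableDescriptor("statistic_visit_detail1");
> desc.addFamily(new HColumnDescriptor("cf1"));
> // example split points only; match them to your "yyyyMMdd|..." rowkeys
> byte[][] splits = new byte[][] {
>         Bytes.toBytes("20120801"),
>         Bytes.toBytes("20120901"),
>         Bytes.toBytes("20121001")
> };
> admin.createTable(desc, splits);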
>
> bruce
>
>
> 2012/10/29 苏铖 <su...@lietou.com>
>
> > Hi, everyone.
> >
> > I changed the max HBase store file size and added more region servers.
> > The former exception no longer occurs.
> > But there is another exception on the client side.
> >
> > 2012-10-29 19:06:27:758 WARN [pool-2-thread-2]
> > org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation
> > | Failed all from region=statistic_visit_detail1,,1351508069797.3272dd30817191d9d393d1d6e1b99d1b., hostname=hadoop02, port=60020
> > java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NullPointerException
> >         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> >         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1557)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
> >         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:900)
> >         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:773)
> >         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:760)
> >         at com.lietou.datawarehouse.imp.HBaseImporter$ActualHBaseImporter$1.process(HBaseImporter.java:150)
> >         at com.lietou.datawarehouse.imp.HBaseImporter$ActualHBaseImporter$1.process(HBaseImporter.java:133)
> >         at com.lietou.datawarehouse.common.range.Repeater.rangeRepeat(Repeater.java:48)
> >         at com.lietou.datawarehouse.common.range.Repeater.rangeRepeat(Repeater.java:30)
> >         at com.lietou.datawarehouse.imp.HBaseImporter$ActualHBaseImporter.run(HBaseImporter.java:162)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >         at java.lang.Thread.run(Thread.java:662)
> > Caused by: java.lang.RuntimeException: java.lang.NullPointerException
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1371)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1383)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1381)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >         ... 3 more
> > Caused by: java.lang.NullPointerException
> >         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:158)
> >         at $Proxy10.multi(Unknown Source)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1386)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1384)
> >         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithoutRetries(HConnectionManager.java:1365)
> >         ... 7 more
> >
> >
> > This error happens quite often. And on the server side, there are some
> > warnings:
> >
> > 2012-10-29 19:50:39,748 WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
> > {"processingtimems":17476,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@62aaeb8e), rpc version=1, client version=29, methodsFingerPrint=54742778","client":"192.168.1.70:3237","starttimems":1351511422270,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
> >
> >
> > I have only 5 threads to execute put actions at the same time.
> > The machine load is not very high.
> >
> > top - 19:55:02 up  7:56,  4 users,  load average: 1.62, 1.36, 1.11
> >
> > Has anyone met this error before? Please help me.
> >
> > Thanks.
> >
> >
> > -----Original Message-----
> > From: 苏铖 [mailto:sucheng@lietou.com]
> > Sent: October 29, 2012 15:53
> > To: user@hbase.apache.org
> > Subject: Re: How to adjust hbase settings when too many store files?
> >
> > I checked the region server log again and found the following:
> >
> > 2012-10-28 06:24:24,811 ERROR org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: Compaction failed regionName=statistic_visit_detail,20120922|13984|451728,1351376659451.9b2bfae5d77109693a153eb16fcb7793., storeName=cf1, fileCount=7, fileSize=1.1g (681.7m, 168.6m, 139.9m, 36.2m, 53.0m, 26.5m, 5.9m), priority=0, time=469259302083252
> > java.io.IOException: java.io.IOException: File /hbase/statistic_visit_detail/9b2bfae5d77109693a153eb16fcb7793/.tmp/3a3e6ee88a524659b9f9716e5ca21a74 could only be replicated to 0 nodes, instead of 1
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1531)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:685)
> >         at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> >         at java.lang.reflect.Method.invoke(Unknown Source)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >
> > It seems that the fileSize already exceeds the max size, which is 1G by
> > default.
> >
> > So I added more region servers and enlarged the max file size, to see if
> > it works.
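> >
> > (For reference, the max file size setting here is hbase.hregion.max.filesize;
> > a sketch of the hbase-site.xml entry, with 4G only as an example value:)
> >
> >   <property>
> >       <name>hbase.hregion.max.filesize</name>
> >       <!-- example only: 4GB instead of the 1GB default -->
> >       <value>4294967296</value>
> >   </property>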
> >
> > Thanks a lot.
> >
> > -----Original Message-----
> > From: Ramkrishna.S.Vasudevan [mailto:ramkrishna.vasudevan@huawei.com]
> > Sent: October 29, 2012 14:06
> > To: user@hbase.apache.org
> > Subject: RE: How to adjust hbase settings when too many store files?
> >
> > Also, check the heap size of your RS.
> > When you say hTable.put(), how many such threads are there?
> >
> > What is your region size? Are your regions splitting continuously due to
> > heavy load?
> >
> > Regards
> > Ram
> >
> >
> > > -----Original Message-----
> > > From: yuzhihong@gmail.com [mailto:yuzhihong@gmail.com]
> > > Sent: Monday, October 29, 2012 10:01 AM
> > > To: user@hbase.apache.org
> > > Cc: <us...@hbase.apache.org>
> > > Subject: Re: How to adjust hbase settings when too many store files?
> > >
> > > What version of hbase were you using ?
> > > Did you pre split the table before loading ?
> > >
> > > Thanks
> > >
> > >
> > >
> > > On Oct 28, 2012, at 8:33 PM, 苏铖 <su...@lietou.com> wrote:
> > >
> > > > Hello. I encounter a region server error when I try to put bulk data
> > > > from a Java client.
> > > >
> > > > The Java client extracts data from a relational database and puts that
> > > > data into HBase.
> > > >
> > > > When I try to extract data from a large table (say, 1 billion records),
> > > > the error happens.
> > > >
> > > >
> > > >
> > > > The region server's log says:
> > > >
> > > >> 2012-10-28 00:00:02,169 WARN org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. has too many store files; delaying flush up to 90000ms
> > > >> 2012-10-28 00:00:02,791 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush thread woke up because memory above low water=347.1m
> > > >> 2012-10-28 00:00:02,791 DEBUG org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Under global heap pressure: Region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. has too many store files, but is 141.5m vs best flushable region's 46.8m. Choosing the bigger.
> > > >> 2012-10-28 00:00:02,791 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. due to global heap pressure
> > > >
> > > > ...
> > > >
> > > >
> > > >
> > > > And finally,
> > > >
> > > >> 2012-10-28 00:00:43,511 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted by user
> > > >> java.io.InterruptedIOException: Aborting compaction of store cf1 in region statistic_visit_detail,20120804|72495|854956,1351353594195.ad2592ee7a3610c60c47cf8be77496c8. because user requested stop.
> > > >        at org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1275)
> > > >        at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:765)
> > > >        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1023)
> > > >        at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:177)
> > > >        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > > >        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > > >        at java.lang.Thread.run(Thread.java:662)
> > > >
> > > >
> > > >
> > > > Then the region server shuts down.
> > > >
> > > > It seems that too many store files (due to too many records from the
> > > > relational DB) consumed too much memory, if I'm right.
> > > >
> > > > I'm new to HBase. What settings should I adjust? Or should I even add
> > > > more region servers?
> > > >
> > > > I'm going to do some research myself, and any advice will be
> > > > appreciated.
> > > >
> > > > Best regards,
> > > >
> > > >
> > > >
> > > > Su
> > > >
> > > >
> > > >