Posted to user@hbase.apache.org by 冯宏华 <fe...@xiaomi.com> on 2014/02/26 12:01:26 UTC

Re: Re: RegionTooBusyException: Above memstore limit

0.94 doesn't throw RegionTooBusyException when the memstore exceeds the blocking threshold (blockingMemStoreSize); it waits in the regionserver instead, which is why you get a TimeoutException on the client side. Nicolas said this in an earlier mail in this thread.
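
For context: in 0.94 a region blocks new writes once its memstore reaches hbase.hregion.memstore.flush.size multiplied by hbase.hregion.memstore.block.multiplier, and the 60000 millis in the traces below matches the default hbase.rpc.timeout. A minimal client-side sketch (the value is illustrative, not a recommendation):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    // Sketch: give the client a longer RPC timeout so a temporarily
    // blocked region fails less eagerly. This only masks the symptom;
    // the write hotspot itself still needs fixing.
    public class ClientTimeout {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("hbase.rpc.timeout", 120000); // default is 60000 ms
            HTable table = new HTable(conf, "test-table");
            table.close();
        }
    }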

Maybe you can try some of the actions suggested in the mails above, such as splitting out more regions to balance the write pressure, randomizing the rowKey to eliminate the hotspot, and so on.
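
For the rowKey randomization, one common pattern is a hash-based salt prefix. A rough sketch (the helper name and the 30-bucket count are made up for illustration):

    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical helper: prefix each doc id with a hash-derived bucket
    // so monotonically increasing ids ("doc-id-55157", ...) spread across
    // many regions instead of all landing in the same one.
    public final class SaltedKey {
        static final int BUCKETS = 30; // roughly match the region count

        static byte[] rowKey(String docId) {
            int bucket = (docId.hashCode() & Integer.MAX_VALUE) % BUCKETS;
            return Bytes.toBytes(String.format("%02d-%s", bucket, docId));
        }
    }

Point gets can recompute the salt from the id; full scans have to fan out over all buckets, which is the usual cost of this trade-off.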

How many regions are in your table? Do all regions encounter such a RegionTooBusyException (in 0.96+) or SocketTimeoutException (in 0.94)?
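
(If it helps to check: a quick way to count a table's regions from code with the 0.94/0.96-era admin API; the table name is a placeholder.)

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    // Count the regions of a table via the admin API.
    public class CountRegions {
        public static void main(String[] args) throws Exception {
            HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
            int n = admin.getTableRegions(Bytes.toBytes("test-table")).size();
            System.out.println("test-table has " + n + " regions");
            admin.close();
        }
    }
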
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: 26 February 2014 18:30
To: user@hbase.apache.org
Subject: Re: Re: RegionTooBusyException: Above memstore limit

This is what I get from HBase 0.94 running the same task that led to
org.apache.hadoop.hbase.RegionTooBusyException
in HBase 0.96.1.1-hadoop2.
Sometimes I get the feeling that I might not be using HBase's full
capacity because some features are left unconfigured.
What could solve this issue?

WARN client.HConnectionManager$HConnectionImplementation: Failed all from region=test-table,doc-id-55157,1393408719943.2c75f461955aa1a1bd319177fa82b1fa., hostname=kcs-testhadoop01, port=60020
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1598)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:750)
        at at.myPackage.backends.HbaseStorage.putDocument(HbaseStorage.java:259)
        at at.myPackage.evaluationTool.Evaluate.save(Evaluate.java:185)
        at at.myPackage.evaluationTool.Evaluate.performSaveEvaluation(Evaluate.java:136)
        at at.myPackage.evaluationTool.Evaluate.evaluate(Evaluate.java:73)
        at at.myPackage.evaluationTool.EvaluationTool.executeEvaluation(EvaluationTool.java:127)
        at at.myPackage.evaluationTool.EvaluationTool.run(EvaluationTool.java:160)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1026)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
        at com.sun.proxy.$Proxy20.multi(Unknown Source)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1427)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1425)
        at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1434)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1422)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        ... 3 more
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:373)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)

thx,




Re: Re: Re: RegionTooBusyException: Above memstore limit

Posted by 冯宏华 <fe...@xiaomi.com>.
bq. I haven't set the number of regions myself at the beginning.

If there is no pre-split, each table starts with only one region, so all the writes go to that single region... that makes it very likely to incur a SocketTimeoutException (0.94) or RegionTooBusyException (0.96+).

You can pre-split when creating the table so that each regionserver serves at least 10+ regions; that would help a lot in terms of write-load balancing.
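
A sketch of what that pre-split could look like with the old (0.94/0.96-era) admin API; the table name, column family, and the 30-region count are placeholders, and the split points assume two-digit salted key prefixes like the sketch earlier in the thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    // Create the table already split into 30 regions, instead of letting
    // a single region absorb the whole initial write load.
    public class PreSplitTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            HTableDescriptor desc = new HTableDescriptor("test-table");
            desc.addFamily(new HColumnDescriptor("cf")); // placeholder family

            int regions = 30;
            byte[][] splits = new byte[regions - 1][];
            for (int i = 1; i < regions; i++) {
                // Region boundaries "01".."29" match two-digit salt prefixes.
                splits[i - 1] = Bytes.toBytes(String.format("%02d", i));
            }
            admin.createTable(desc, splits);
            admin.close();
        }
    }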
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: 27 February 2014 1:50
To: user@hbase.apache.org
Subject: Re: Re: Re: RegionTooBusyException: Above memstore limit

I haven't set the number of regions myself at the beginning. In 0.94,
with a region size of 10 GB, I start with one region, and after around
250 GB of saves I see 60 regions running; somewhere around there the
timeout exceptions start flying around.

java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:35248 remote=kcs-testhadoop01/192.168.111.210:60020]
...

But HBase continues the jobs, and I have now reached over 300 GB of
saves. For every 10000 saves there are 100 loads in the process. I will
do the same again with 0.96 and let you know; for 0.94 it is still
running with only the one exception I described, but I am sure more
will come.

regards.


