Posted to user@hbase.apache.org by 冯宏华 <fe...@xiaomi.com> on 2014/02/26 09:08:37 UTC

Re: egionTooBusyException: Above memstore limit

Would you please provide the log of the region server serving the 'busy' region: regionName=test-table,doc-id-843162,1393341942533.8477b42b33d2fe9abb2b25a4e5e94b24.? HBASE-10499 is an issue where a problematic region continuously throws RegionTooBusyException because it is never flushed. I just want to confirm whether your problem is a reproduction of that issue. Thanks.
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: February 26, 2014 0:16
To: user@hbase.apache.org
Subject: egionTooBusyException: Above memstore limit

hello,
I just got this exception from hbase-0.96.1.1-hadoop2 while sending a large number of read/write requests over an extended period. Is there something I should configure?

14/02/25 16:26:37 INFO client.AsyncProcess: #3, waiting for some tasks to finish. Expected max=0, tasksSent=898465, tasksDone=898464, currentTasksDone=898464, retries=215 hasError=false, tableName=test-table
14/02/25 16:26:37 INFO client.AsyncProcess: #3, table=test-table, attempt=12/35 failed 1 ops, last exception: org.apache.hadoop.hbase.RegionTooBusyException: org.apache.hadoop.hbase.RegionTooBusyException: Above memstore limit, regionName=test-table,doc-id-843162,1393341942533.8477b42b33d2fe9abb2b25a4e5e94b24., server=kcs-testhadoop01,60020,1393335573827, memstoreSize=268441392, blockingMemStoreSize=268435456
        at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:2546)
        at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:1948)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4043)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3354)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3258)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26935)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2175)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1879)
 on kcs-testhadoop01,60020,1393335573827, tracking started Tue Feb 25 16:25:49 CET 2014, retrying after 20150 ms, replay 1 ops.
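
For reference, the blockingMemStoreSize of 268435456 bytes is 256 MB, which matches the usual defaults of this era: hbase.hregion.memstore.flush.size (128 MB) multiplied by hbase.hregion.memstore.block.multiplier (2). Once a region's memstore grows past that product, writes to it are rejected (0.96+) or stalled (0.94) until a flush brings it back under the limit. A minimal sketch of how the two settings combine, assuming those defaults (the real values come from hbase-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreLimitCheck {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // defaults assumed here: 128 MB flush size, block multiplier 2
        long flushSize = conf.getLong("hbase.hregion.memstore.flush.size", 134217728L);
        long multiplier = conf.getLong("hbase.hregion.memstore.block.multiplier", 2L);
        // 134217728 * 2 = 268435456, the blockingMemStoreSize reported in the log above
        System.out.println("per-region blocking limit = " + (flushSize * multiplier) + " bytes");
    }
}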

regards,



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/egionTooBusyException-Above-memstore-limit-tp4056339.html
Sent from the HBase User mailing list archive at Nabble.com.

Re: Re: egionTooBusyException: Above memstore limit

Posted by 冯宏华 <fe...@xiaomi.com>.
btw: any chance you could provide the log of the regionserver that serves the problematic region for which RegionTooBusyException is thrown (hbase-0.96.1.1)? Thanks.
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: February 26, 2014 18:30
To: user@hbase.apache.org
Subject: Re: Re: egionTooBusyException: Above memstore limit

This is what I get from hbase 0.94 when running the same task that led to org.apache.hadoop.hbase.RegionTooBusyException in hbase 0.96.1.1-hadoop2. Sometimes I get the feeling that I might not be using HBase's full capacity because some features are left unconfigured. What could solve this issue?

WARN client.HConnectionManager$HConnectionImplementation: Failed all from region=test-table,doc-id-55157,1393408719943.2c75f461955aa1a1bd319177fa82b1fa., hostname=kcs-testhadoop01, port=60020
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1598)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:750)
        at at.myPackage.backends.HbaseStorage.putDocument(HbaseStorage.java:259)
        at at.myPackage.evaluationTool.Evaluate.save(Evaluate.java:185)
        at at.myPackage.evaluationTool.Evaluate.performSaveEvaluation(Evaluate.java:136)
        at at.myPackage.evaluationTool.Evaluate.evaluate(Evaluate.java:73)
        at at.myPackage.evaluationTool.EvaluationTool.executeEvaluation(EvaluationTool.java:127)
        at at.myPackage.evaluationTool.EvaluationTool.run(EvaluationTool.java:160)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1026)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
        at com.sun.proxy.$Proxy20.multi(Unknown Source)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1427)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1425)
        at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1434)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1422)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        ... 3 more
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:373)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)

thx,



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/RegionTooBusyException-Above-memstore-limit-tp4056339p4056398.html
Sent from the HBase User mailing list archive at Nabble.com.

Re: Re: Re: egionTooBusyException: Above memstore limit

Posted by 冯宏华 <fe...@xiaomi.com>.
bq. I haven't set the number of regions myself at the beginning.

If there is no pre-split, each table starts with only one region, so all the writes go to that single region...very likely to incur SocketTimeoutException (0.94) or RegionTooBusyException (0.96+).

You can pre-split the table when creating it so that each regionserver serves at least 10+ regions; that would help a lot in terms of write-load balance.
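
For example, with the 0.96 Java client a pre-split table could be created roughly like the sketch below. The column family name and the doc-id split points are illustrative assumptions, not taken from this thread; real split points have to follow the actual key distribution (keys sort lexicographically, so fixed-width numbering matters).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("test-table"));
        desc.addFamily(new HColumnDescriptor("cf"));   // "cf" is a placeholder family name
        // illustrative split points; choose them from the real doc-id distribution
        byte[][] splits = new byte[][] {
            Bytes.toBytes("doc-id-200000"),
            Bytes.toBytes("doc-id-400000"),
            Bytes.toBytes("doc-id-600000"),
            Bytes.toBytes("doc-id-800000")
        };
        admin.createTable(desc, splits);
        admin.close();
    }
}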
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: February 27, 2014 1:50
To: user@hbase.apache.org
Subject: Re: Re: Re: egionTooBusyException: Above memstore limit

I haven't set the number of regions myself at the beginning. In 0.94, with a region size of 10 GB, I start with one region, and after around 250 GB of saves I see 60 regions running; somewhere around this point the timeout exception starts flying around.

java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:35248 remote=kcs-testhadoop01/192.168.111.210:60020]
...

But hbase continues the jobs, and I have now reached over 300 GB of saves. For each 10000 saves, there are 100 loads in the process. I will do the same again with 0.96 and let you know, but with 0.94 it is still running with only the one exception I described. I am sure more will come, though.

regards.



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/RegionTooBusyException-Above-memstore-limit-tp4056339p4056413.html
Sent from the HBase User mailing list archive at Nabble.com.

Re: Re: egionTooBusyException: Above memstore limit

Posted by 冯宏华 <fe...@xiaomi.com>.
0.94 doesn't throw RegionTooBusyException when the memstore exceeds the blocking memstore size...it waits in the regionserver, which is why you get a TimeoutException on the client side. Nicolas said this in the mail above.

Maybe you can try some of the actions suggested in the mails above, such as splitting out more regions to balance the write pressure, randomizing the rowKey to eliminate hotspotting, and so on (a sketch of one such keying scheme follows below).
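
One common way to randomize the rowKey is to prefix the sequential doc id with a short hash, so consecutive ids land in different regions. A sketch of that idea (it is not something prescribed in this thread, and it trades away simple range scans over plain doc ids):

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;

public class SaltedKey {
    // hypothetical helper: 4 hex chars of the MD5 of the doc id as a spreading prefix
    static byte[] saltedRowKey(String docId) {
        String prefix = MD5Hash.getMD5AsHex(Bytes.toBytes(docId)).substring(0, 4);
        return Bytes.toBytes(prefix + "-" + docId);
    }

    public static void main(String[] args) {
        // e.g. "doc-id-843162" from the log above becomes "<hash>-doc-id-843162"
        System.out.println(Bytes.toString(saltedRowKey("doc-id-843162")));
    }
}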

How many regions are in your table? Do all regions encounter such a RegionTooBusyException (in 0.96+) or SocketTimeoutException (in 0.94)?
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: February 26, 2014 18:30
To: user@hbase.apache.org
Subject: Re: Re: egionTooBusyException: Above memstore limit

This is what I get from hbase 0.94 when running the same task that led to org.apache.hadoop.hbase.RegionTooBusyException in hbase 0.96.1.1-hadoop2. Sometimes I get the feeling that I might not be using HBase's full capacity because some features are left unconfigured. What could solve this issue?

WARN client.HConnectionManager$HConnectionImplementation: Failed all from region=test-table,doc-id-55157,1393408719943.2c75f461955aa1a1bd319177fa82b1fa., hostname=kcs-testhadoop01, port=60020
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1598)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:750)
        at at.myPackage.backends.HbaseStorage.putDocument(HbaseStorage.java:259)
        at at.myPackage.evaluationTool.Evaluate.save(Evaluate.java:185)
        at at.myPackage.evaluationTool.Evaluate.performSaveEvaluation(Evaluate.java:136)
        at at.myPackage.evaluationTool.Evaluate.evaluate(Evaluate.java:73)
        at at.myPackage.evaluationTool.EvaluationTool.executeEvaluation(EvaluationTool.java:127)
        at at.myPackage.evaluationTool.EvaluationTool.run(EvaluationTool.java:160)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1026)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
        at com.sun.proxy.$Proxy20.multi(Unknown Source)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1427)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1425)
        at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1434)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1422)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        ... 3 more
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:373)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)

thx,



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/RegionTooBusyException-Above-memstore-limit-tp4056339p4056398.html
Sent from the HBase User mailing list archive at Nabble.com.

Re: Re: egionTooBusyException: Above memstore limit

Posted by Ted Yu <yu...@gmail.com>.
HBASE-8755 introduced a new write thread model.
It is integrated into the recently released 0.98.0.

You can consider giving 0.98.0 a spin. 

FYI

On Feb 26, 2014, at 1:03 AM, shapoor <es...@yahoo.com> wrote:

> I have a task which comes with a lot of requests. I had the same issue with
> 0.94. I managed to solve it by letting my requests wait longer when the
> traffic is high and they have to wait (expanding the wait time). I put the
> following properties in hbase-site.xml:
> 
>  <property>
>    <name>hbase.rpc.timeout</name>
>    <value>120000</value>
>    <description>
>    </description>
>  </property>
> 
>  <property>
>    <name>hbase.regionserver.lease.period</name>
>    <value>120000</value>
>    <description>
>    </description>
>  </property>
> 
> Now I changed from 0.96.1.1-hadoop2 back to 0.94 and, as before, it didn't
> throw any exceptions with 0.94. But I guess this setting doesn't work for
> 0.96.1.1-hadoop2, as Nicolas Liochon suggested in this post.
> For the log, I have to switch back to 0.96.1.1-hadoop2 and run the task
> again, which I will do in the following days, and then let you know.
> 
> thx and regards,
> 
> 
> 
> --
> View this message in context: http://apache-hbase.679495.n3.nabble.com/RegionTooBusyException-Above-memstore-limit-tp4056339p4056395.html
> Sent from the HBase User mailing list archive at Nabble.com.
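
If the same timeout extension is wanted on the 0.96 client, the values can also be set programmatically on the client Configuration. A sketch, assuming the 0.96 client honors hbase.rpc.timeout and hbase.client.operation.timeout (verify against the version in use; the column family and qualifier names are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientTimeoutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.rpc.timeout", 120000);              // same value as the hbase-site.xml above
        conf.setInt("hbase.client.operation.timeout", 120000); // assumed key for the overall operation timeout
        HTable table = new HTable(conf, "test-table");
        Put put = new Put(Bytes.toBytes("doc-id-843162"));     // row key taken from the earlier log
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
        table.put(put);
        table.close();
    }
}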
