Posted to user@hbase.apache.org by Amit Mor <am...@gmail.com> on 2013/06/18 23:48:12 UTC

client bursts followed by drops

Hello,

We use HBase 0.94.2 and we are seeing (mostly read) latency issues, with an
interesting twist: the client (100 threads) gets stuck waiting on HBase,
stops sending RPCs, and then seems to be freed. Once freed, while trying to
flush all the requests it had queued, it triggers another vicious cycle of
burst - drop - burst.
I am seeing about 40K requests per second per RS.
The cluster is mostly read; there are no compaction or split storms. Just
bursts and bursts.
We set 30 RPC handlers per RS (avg scan size is 5K) and most of the time
they seem to be WAITING. IPC at DEBUG revealed (see below) that sometimes
the queueTime is very large and sometimes the responseTime is very large,
but only for a small percentage of requests.
The bursts seem to be periodic and uncorrelated with GC activity or CPU
(they appear the moment the RS comes online and the heap is still free).
The row keys are murmur3 hashed, and I don't really see any hotspotting.
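
In case it helps, the client read path is roughly the following (a simplified
sketch; the table name, row-key placeholder and caching value are made up, and
error handling is trimmed):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class Readers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    final HTablePool pool = new HTablePool(conf, 100);
    ExecutorService exec = Executors.newFixedThreadPool(100);
    for (int i = 0; i < 100; i++) {
      exec.submit(new Runnable() {
        public void run() {
          try {
            HTableInterface table = pool.getTable("events"); // made-up table name
            // the row key is the murmur3 hash of the logical key (placeholder here)
            Scan scan = new Scan(Bytes.toBytes("00afdeadbeef"));
            scan.setCaching(100); // rows per RPC; actual value may differ
            ResultScanner scanner = table.getScanner(scan);
            try {
              for (Result r : scanner) {
                // process(r)
              }
            } finally {
              scanner.close();
              table.close(); // returns the table to the pool
            }
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      });
    }
    exec.shutdown();
  }
}

The 30 handlers on the RS side are just hbase.regionserver.handler.count in
hbase-site.xml.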

Any idea what might cause those bursts?

Thanks,
Amit

Re: client bursts followed by drops

Posted by Jean-Daniel Cryans <jd...@apache.org>.
None of your attachments made it across; this mailing list often (but
not always) strips them.

Are you able to jstack the region server when the drops happen and the
queue time is high? This could be
https://issues.apache.org/jira/browse/HBASE-5898, but that's a stretch
without more info.
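
If catching the right moment by hand is awkward, a small watchdog inside the
client JVM can dump its own thread stacks whenever a call stalls, so you at
least see where the 100 client threads are blocked. Untested sketch, plain
java.lang.management, threshold made up; for the region server itself a plain
jstack <pid> taken during a drop is what I'd look at:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.util.concurrent.Callable;

// Hypothetical helper: wrap each HBase call in timed(); if the call takes
// longer than STALL_MS, dump every thread in this (client) JVM to stderr.
public class StallWatchdog {
  private static final long STALL_MS = 2000; // made-up threshold

  public static <T> T timed(Callable<T> call) throws Exception {
    long start = System.currentTimeMillis();
    try {
      return call.call();
    } finally {
      if (System.currentTimeMillis() - start > STALL_MS) {
        for (ThreadInfo t :
            ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)) {
          System.err.print(t);
        }
      }
    }
  }
}

e.g. wrap each get/getScanner call in StallWatchdog.timed(...).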

You could also try to see if a more recent version exhibits the same behavior.

J-D

On Wed, Jun 19, 2013 at 1:09 AM, Amit Mor <am...@gmail.com> wrote:
> Attachment of JMX requests metric showing bursts on RS
>
>
> On Wed, Jun 19, 2013 at 12:48 AM, Amit Mor <am...@gmail.com> wrote:
>>
>> Hello,
>>
>> We use HBase 0.94.2 and we are seeing (mostly read) latency issues, with an
>> interesting twist: the client (100 threads) gets stuck waiting on HBase,
>> stops sending RPCs, and then seems to be freed. Once freed, while trying to
>> flush all the requests it had queued, it triggers another vicious cycle of
>> burst - drop - burst.
>> I am seeing about 40K requests per second per RS.
>> The cluster is mostly read; there are no compaction or split storms. Just
>> bursts and bursts.
>> We set 30 RPC handlers per RS (avg scan size is 5K) and most of the time
>> they seem to be WAITING. IPC at DEBUG revealed (see below) that sometimes
>> the queueTime is very large and sometimes the responseTime is very large,
>> but only for a small percentage of requests.
>> The bursts seem to be periodic and uncorrelated with GC activity or CPU
>> (they appear the moment the RS comes online and the heap is still free).
>> The row keys are murmur3 hashed, and I don't really see any hotspotting.
>>
>> Any idea what might cause those bursts?
>>
>> Thanks,
>> Amit
>>
>

Re: client bursts followed by drops

Posted by Amit Mor <am...@gmail.com>.
Attachment of JMX requests metric showing bursts on RS


On Wed, Jun 19, 2013 at 12:48 AM, Amit Mor <am...@gmail.com> wrote:

> Hello,
>
> We use HBase 0.94.2 and we are seeing (mostly read) latency issues, with an
> interesting twist: the client (100 threads) gets stuck waiting on HBase,
> stops sending RPCs, and then seems to be freed. Once freed, while trying to
> flush all the requests it had queued, it triggers another vicious cycle of
> burst - drop - burst.
> I am seeing about 40K requests per second per RS.
> The cluster is mostly read; there are no compaction or split storms. Just
> bursts and bursts.
> We set 30 RPC handlers per RS (avg scan size is 5K) and most of the time
> they seem to be WAITING. IPC at DEBUG revealed (see below) that sometimes
> the queueTime is very large and sometimes the responseTime is very large,
> but only for a small percentage of requests.
> The bursts seem to be periodic and uncorrelated with GC activity or CPU
> (they appear the moment the RS comes online and the heap is still free).
> The row keys are murmur3 hashed, and I don't really see any hotspotting.
>
> Any idea what might cause those bursts?
>
> Thanks,
> Amit
>
>