Posted to user@hbase.apache.org by Saad Mufti <sa...@gmail.com> on 2018/02/18 23:51:37 UTC

Trying To Understand BucketCache Evictions In HBase 1.3.1

Hi,

We are running HBase 1.3.1 on an AWS EMR cluster. Our BucketCache is
configured for 400 GB on a set of attached EBS disk volumes, and all column
families are marked as in-memory in their column family schemas using
INMEMORY => 'true' (except for one column family we only ever write to, on
which we set BUCKETCACHE => 'false').
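
As a rough illustration, here is a minimal Java client sketch of how these
column-family flags could be applied; the table and family names below are
made up, and the attribute that actually disables caching is the block-cache
flag, BLOCKCACHE, as corrected later in the thread:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CacheFlagsSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Admin admin = connection.getAdmin()) {
                TableName table = TableName.valueOf("my_table"); // hypothetical table name

                // Read-heavy family pinned to the in-memory partition of the cache
                // (IN_MEMORY => 'true' in shell terms).
                HColumnDescriptor readFamily = new HColumnDescriptor("d");
                readFamily.setInMemory(true);

                // Write-only family: disable block caching for it entirely
                // (BLOCKCACHE => 'false' in shell terms).
                HColumnDescriptor writeOnlyFamily = new HColumnDescriptor("w");
                writeOnlyFamily.setBlockCacheEnabled(false);

                admin.modifyColumn(table, readFamily);
                admin.modifyColumn(table, writeOnlyFamily);
            }
        }
    }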

Even though all column families are marked INMEMORY, we have the following
ratios set:

"hbase.bucketcache.memory.factor":"0.8",

"hbase.bucketcache.single.factor":"0.1",

"hbase.bucketcache.multi.factor":"0.1",

Currently the bucket cache shows evictions even though it has tons of free
space, and I am trying to understand why we get any evictions at all. We do
have minor compactions going on, but we have not set
hbase.rs.evictblocksonclose to any value, and from looking at the code it
defaults to false. The total bucket cache size is nowhere near any of the
above limits; in fact, on some long-running servers where we stopped
traffic, the cache size went down to 0, which makes me think something is
evicting blocks from the bucket cache in the background.

You can see a screenshot from one of the regionserver L2 stats UI pages at
https://imgur.com/a/2ZUSv . Another interesting thing to me on this page is
that it shows a non-zero evicted block count but says Evictions: 0.

Any help understanding this would be appreciated.

----
Saad

Re: Trying To Understand BucketCache Evictions In HBase 1.3.1

Posted by Saad Mufti <sa...@gmail.com>.
Thanks, it all makes sense now.

Cheers.

----
Saad


Re: Trying To Understand BucketCache Evictions In HBase 1.3.1

Posted by Anoop John <an...@gmail.com>.
Hi,

It seems you have write ops happening, since you mention minor compactions.
When a compaction happens, the compacted file's blocks get evicted from the
cache, regardless of the value of 'hbase.rs.evictblocksonclose'. That config
only comes into play when a Store is closed, i.e. when a region is moved or
split (which closes its stores), or when the table is disabled or deleted.
It applies to all such store-close cases. But minor compactions always mean
evictions. These are not done by the eviction thread, which watches for low
free space and selects LRU blocks to evict; they are done by the compaction
threads. That is why you see the eviction ops (done by the eviction thread)
at zero while the number of evicted blocks is non-zero. Those are most
likely the blocks of the compacted-away files. Hope this helps you
understand what is going on.

-Anoop-
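
To make the two counters concrete, here is a minimal sketch (not from the
thread; the helper class below is made up for illustration) of reading them
off the HBase 1.x BlockCache API: the eviction count reflects runs of the
background eviction thread, while the evicted-block count also includes
blocks dropped when compacted-away files are cleaned out of the cache.

    import org.apache.hadoop.hbase.io.hfile.BlockCache;
    import org.apache.hadoop.hbase.io.hfile.CacheStats;

    public final class CacheEvictionCountersSketch {
        private CacheEvictionCountersSketch() {}

        /** Prints the two counters that look contradictory on the RegionServer UI. */
        public static void printCounters(BlockCache cache) {
            CacheStats stats = cache.getStats();

            // How many times the eviction thread actually ran; this stays at 0
            // while the cache never approaches its size limits.
            long evictionRuns = stats.getEvictionCount();

            // Total number of blocks removed from the cache, including blocks of
            // compacted-away files evicted by the compaction threads.
            long evictedBlocks = stats.getEvictedCount();

            System.out.println("Evictions (eviction-thread runs): " + evictionRuns);
            System.out.println("Evicted blocks (incl. compacted-away files): " + evictedBlocks);
        }
    }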



Re: Trying To Understand BucketCache Evictions In HBase 1.3.1

Posted by Saad Mufti <sa...@gmail.com>.
Sorry, I meant BLOCKCACHE => 'false' on the one column family we don't want
cached.

Cheers.

----
Saad

