Posted to user@hbase.apache.org by Srinidhi Muppalla <sr...@trulia.com> on 2019/03/27 18:57:49 UTC

Debugging High I/O Wait

Hello,

We've noticed an issue in our HBase cluster where one of the region-servers has a spike in I/O wait associated with a spike in load for that node. As a result, our request times to the cluster increase dramatically. Initially, we suspected that we were experiencing hotspotting, but even after temporarily blocking requests to the highest-volume regions on that region-server, the issue persisted. Moreover, the request counts to the regions on that region-server shown in the HBase UI were not particularly high, and our own application-level metrics on the requests we were making were not very high either. From a thread dump of the region-server, it appears that our get and scan requests are getting stuck when trying to read blocks from our bucket cache, leaving the threads in a 'runnable' state.

For context, we are running HBase 1.3.0 on an S3-backed cluster on EMR, and our bucket cache is running in file mode. Our region-servers all have SSDs. We have a combined cache with the standard LRU cache as L1 and the file-mode bucket cache as L2. Our bucket cache utilization is less than 50% of the allocated space.
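
For reference, the relevant cache settings in our hbase-site.xml look
roughly like this (the paths and values here are illustrative, not our
exact production settings):

    <property>
      <name>hbase.bucketcache.ioengine</name>
      <value>file:/mnt/ssd1/bucketcache</value>  <!-- L2 cache file on SSD; path is an example -->
    </property>
    <property>
      <name>hbase.bucketcache.size</name>
      <value>73728</value>  <!-- L2 cache size in MB -->
    </property>
    <property>
      <name>hfile.block.cache.size</name>
      <value>0.1</value>  <!-- fraction of heap for the L1 LRU cache -->
    </property>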

We suspect that part of the issue is disk space utilization on the region-server, as our max disk space utilization also increased when this happened. What can we do to minimize disk space utilization? The actual HFiles are on S3 -- only the cache, application logs, and write-ahead logs are on the region-servers. Other than disk space utilization, what factors could cause high I/O wait in HBase, and is there anything we can do to minimize it?

Right now, the only thing that works is terminating and recreating the cluster (which we can do safely because it's S3 backed).

Thanks!
Srinidhi

Re: Debugging High I/O Wait

Posted by ramkrishna vasudevan <ra...@gmail.com>.
Hi Srinidhi

Thanks for the details. As Stack said, can you get a thread dump and i/o
stats while this issue is happening? You can compare them with the case
when the RS is in good shape.

If the SSD writes and reads are the reason the bucket cache reads perform
slower, then it might be better to have a separate SSD. But let's first
check the dumps to see whether that is the real reason.

Regards
Ram


Re: Debugging High I/O Wait

Posted by ramkrishna vasudevan <ra...@gmail.com>.
Hi

I think you can try backporting that fix into your version. IMO the SSD
fragmentation issue may also be due to the way the bucket allocator works.

Regards
Ram


Re: Debugging High I/O Wait

Posted by Srinidhi Muppalla <sr...@trulia.com>.
Thanks for the suggestions! The total size of the bucket cache is 72 GB. We generally have close to half of that used when the issue happens. We are using only one file path for the bucket cache. We will try using multiple paths and also adding an additional disk to our region-servers, as suggested.

When looking through the HBase Jira, I came across this ticket -- https://issues.apache.org/jira/browse/HBASE-16630 -- which affects the version of HBase that we are running. From what I can tell, the bug + fix only applies when the bucket cache is running in memory. Is there an equivalent bug + fix for a bucket cache running in file mode?

Thanks,
Srinidhi



Re: Debugging High I/O Wait

Posted by Anoop John <an...@gmail.com>.
Hi Srinidhi
You have a file-mode bucket cache. What is the size of the cache? Did you
configure a single file path for the cache, or more than one? If the
former, splitting the cache into multiple files (the paths can be given
comma separated in the config) may help.
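
Something like the below in hbase-site.xml, as a sketch (the paths are
examples, and do verify that the ioengine in your release accepts the
multi-file form before relying on it):

    <property>
      <name>hbase.bucketcache.ioengine</name>
      <!-- multiple cache files, comma separated, ideally on different devices -->
      <value>files:/mnt/ssd1/bucketcache,/mnt/ssd2/bucketcache</value>
    </property>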

Anoop

On Fri, Apr 5, 2019 at 2:58 AM Srinidhi Muppalla <sr...@trulia.com>
wrote:

> After some more digging, I discovered that during the time that the RS is
> stuck the kernel message buffer outputted only this message
>
> "[1031214.108110] XFS: java(6522) possible memory allocation deadlock size
> 32944 in kmem_alloc (mode:0x2400240)"
>
> From my reading online, the cause of this error appears to generally be
> excessive memory and file fragmentation. We haven't changed the mslab
> config and we are running HBase 1.3.0 so it should be running by default.
> The issue tends to arise consistently and regularly (every 10 or so days)
> and once one node is affected other nodes start to follow after a few
> hours. What could be causing this to happen and is there any way to prevent
> or minimize fragmentation?
>
> Best,
> Srinidhi
>
> On 3/29/19, 11:02 AM, "Srinidhi Muppalla" <sr...@trulia.com> wrote:
>
>     Stack and Ram,
>
>     Attached the thread dumps. 'Jstack normal' is the normal node. 'Jstack
> problematic' was taken when the node was stuck.
>
>     We don't have full I/O stats for the problematic node. Unfortunately,
> it was impacting production so we had to recreate the cluster as soon as
> possible and couldn't get full data. I attached the dashboards with the
> wait I/O and other CPU stats. Thanks for helping look into the issue!
>
>     Best,
>     Srinidhi
>
>
>
>     On 3/28/19, 2:41 PM, "Stack" <st...@duboce.net> wrote:
>
>         Mind putting up a thread dump?
>
>         How many spindles?
>
>         If you compare the i/o stats between a good RS and a stuck one,
> how do they
>         compare?
>
>         Thanks,
>         S
>
>
>         On Wed, Mar 27, 2019 at 11:57 AM Srinidhi Muppalla <
> srinidhim@trulia.com>
>         wrote:
>
>         > Hello,
>         >
>         > We've noticed an issue in our HBase cluster where one of the
>         > region-servers has a spike in I/O wait associated with a spike
> in Load for
>         > that node. As a result, our request times to the cluster increase
>         > dramatically. Initially, we suspected that we were experiencing
>         > hotspotting, but even after temporarily blocking requests to the
> highest
>         > volume regions on that region-servers the issue persisted.
> Moreover, when
>         > looking at request counts to the regions on the region-server
> from the
>         > HBase UI, they were not particularly high and our own
> application level
>         > metrics on the requests we were making were not very high
> either. From
>         > looking at a thread dump of the region-server, it appears that
> our get and
>         > scan requests are getting stuck when trying to read from the
> blocks in our
>         > bucket cache leaving the threads in a 'runnable' state. For
> context, we are
>         > running HBase 1.30 on a cluster backed by S3 running on EMR and
> our bucket
>         > cache is running in File mode. Our region-servers all have SSDs.
> We have a
>         > combined cache with the L1 standard LRU cache and the L2 file
> mode bucket
>         > cache. Our Bucket Cache utilization is less than 50% of the
> allocated space.
>         >
>         > We suspect that part of the issue is our disk space utilization
> on the
>         > region-server as our max disk space utilization also increased
> as this
>         > happened. What things can we do to minimize disk space
> utilization? The
>         > actual HFiles are on S3 -- only the cache, application logs, and
> write
>         > ahead logs are on the region-servers. Other than the disk space
>         > utilization, what factors could cause high I/O wait in HBase and
> is there
>         > anything we can do to minimize it?
>         >
>         > Right now, the only thing that works is terminating and
> recreating the
>         > cluster (which we can do safely because it's S3 backed).
>         >
>         > Thanks!
>         > Srinidhi
>         >
>
>
>
>
>

Re: Debugging High I/O Wait

Posted by ramkrishna vasudevan <ra...@gmail.com>.
Hi Srinidhi

Am not able to view the attachments for some reason. However, as Anoop
suggested, can you try multiple paths for the bucket cache? As said in the
first email, a separate SSD for the WAL writes and multiple file paths for
the bucket cache SSD may help. Here again, the multiple paths can also be
on multiple devices.

Regards
Ram


Re: Debugging High I/O Wait

Posted by Srinidhi Muppalla <sr...@trulia.com>.
After some more digging, I discovered that while the RS is stuck, the kernel message buffer contains only this message:

"[1031214.108110] XFS: java(6522) possible memory allocation deadlock size 32944 in kmem_alloc (mode:0x2400240)"

From my reading online, the cause of this error generally appears to be excessive memory and file fragmentation. We haven't changed the MSLAB config, and since we are running HBase 1.3.0, it should be enabled by default. The issue tends to arise consistently and regularly (every 10 or so days), and once one node is affected, other nodes start to follow after a few hours. What could be causing this to happen, and is there any way to prevent or minimize fragmentation?
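
In case it helps, this is roughly how we are checking fragmentation on the
XFS volume holding the cache (the device and file paths are examples from
our setup, not prescriptions):

    # Report fragmentation of regular files on the cache volume (read-only).
    sudo xfs_db -r -c "frag -f" /dev/nvme1n1

    # Online defragmentation of the bucket cache file if the factor is high.
    sudo xfs_fsr -v /mnt/ssd1/bucketcache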

Best,
Srinidhi

On 3/29/19, 11:02 AM, "Srinidhi Muppalla" <sr...@trulia.com> wrote:

    Stack and Ram, 
    
    Attached the thread dumps. 'Jstack normal' is the normal node. 'Jstack problematic' was taken when the node was stuck. 
    
    We don't have full I/O stats for the problematic node. Unfortunately, it was impacting production so we had to recreate the cluster as soon as possible and couldn't get full data. I attached the dashboards with the wait I/O and other CPU stats. Thanks for helping look into the issue!
    
    Best,
    Srinidhi

Re: Debugging High I/O Wait

Posted by Stack <st...@duboce.net>.
Mind putting up a thread dump?

How many spindles?

If you compare the i/o stats between a good RS and a stuck one, how do they
compare?
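
E.g., something like this on a good RS and on the stuck one (the process
match and output paths are just examples):

    jstack -l $(pgrep -f HRegionServer) > /tmp/rs-jstack.txt
    iostat -dxm 5 12 > /tmp/rs-iostat.txt   # watch %util, await, avgqu-sz per device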

Thanks,
S



Re: Debugging High I/O Wait

Posted by Srinidhi Muppalla <sr...@trulia.com>.
They reside on the same SSD. Is it advisable to have a separate volume for the WALs?

There are writes happening while reads are happening from the bucket cache.

I believe our LRU cache is big enough to hold all the index blocks. I don't have the exact numbers from the cluster when it last had the issue, but right now on our currently healthy cluster, our region servers have 2.2 GB dedicated to the LRU cache. On average, each region server has a sum total of ~20 MB for all its indexes.
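
For anyone wanting to double-check index sizes themselves, dumping HFile
metadata should show them too (assuming the hfile tool in 1.3; the path
below is just an example):

    hbase hfile -m -f s3://our-bucket/hbase/data/default/our-table/<region>/<cf>/<hfile> | grep -i index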

Thanks,
Srinidhi



Re: Debugging High I/O Wait

Posted by ramkrishna vasudevan <ra...@gmail.com>.
Hi Srinidhi

As you said, the cache and WAL files are on the RS SSD drives. Do the cache
and the WAL files reside on separate SSDs or on the same SSD?

Are there writes happening while these reads happen from the bucket cache?
Is your LRU cache big enough to hold all the index blocks?

Regards
Ram

