Posted to dev@hbase.apache.org by "ryan rawson (JIRA)" <ji...@apache.org> on 2010/02/03 01:09:18 UTC

[jira] Commented: (HBASE-1956) Export HDFS read and write latency as a metric

    [ https://issues.apache.org/jira/browse/HBASE-1956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12828852#action_12828852 ] 

ryan rawson commented on HBASE-1956:
------------------------------------

I'm using the file context and the count never seems to get reset, so the average is computed over all time rather than per heartbeat interval (10s in the config).

Is this expected?

According to docs on the 'net, volatile does not make a compound read-modify-write (like an increment) atomic, so maybe I'm always hitting a race condition and the counter is never reset.
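
To illustrate what I mean (just a sketch, not the actual metrics code in the patch): with a plain volatile counter, an increment racing with the heartbeat reset can silently undo the reset:

public class VolatileCounterRace {
  private volatile long count = 0;

  // Called on every HDFS read/write, from many handler threads.
  void increment() {
    count++;  // read-modify-write; NOT atomic even though the field is volatile
  }

  // Called by the metrics thread once per heartbeat.
  long getAndReset() {
    long snapshot = count;
    count = 0;  // an in-flight increment() that read the old value can
                // write old+1 after this, effectively undoing the reset
    return snapshot;
  }
}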

> Export HDFS read and write latency as a metric
> ----------------------------------------------
>
>                 Key: HBASE-1956
>                 URL: https://issues.apache.org/jira/browse/HBASE-1956
>             Project: Hadoop HBase
>          Issue Type: Improvement
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>            Priority: Minor
>             Fix For: 0.20.3, 0.21.0
>
>         Attachments: HBASE-1956.patch, HBASE-1956.patch
>
>
> HDFS write latency spikes in particular are an indicator of general cluster overloading. We see this when the WAL writer complains about writes taking > 1 second, sometimes > 4, etc. If, for example, the average write latency over the monitoring period is exported as a metric, it can feed into alerting or automatic provisioning of additional cluster hardware. While we're at it, export read-side metrics as well.
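
For what it's worth, here is a sketch of the kind of per-interval average I'd expect (class and method names are hypothetical, this is not the attached patch): accumulate with atomic counters and snapshot-and-reset once per metrics period, so the reset can't be lost:

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not the HBASE-1956 patch.
public class IntervalLatencyMetric {
  private final AtomicLong totalNanos = new AtomicLong();
  private final AtomicLong ops = new AtomicLong();

  // Record one HDFS read or write.
  public void update(long latencyNanos) {
    totalNanos.addAndGet(latencyNanos);
    ops.incrementAndGet();
  }

  // Called by the metrics thread each period; returns the average latency
  // in ms for that period only, then resets. getAndSet(0) never loses a
  // concurrent update the way "count = 0" on a volatile can.
  // (The two resets aren't a single atomic snapshot, but no update is dropped.)
  public double averageMillisAndReset() {
    long nanos = totalNanos.getAndSet(0);
    long count = ops.getAndSet(0);
    return count == 0 ? 0.0 : (nanos / 1000000.0) / count;
  }
}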

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.