Posted to issues@hbase.apache.org by "Bryan Beaudreault (Jira)" <ji...@apache.org> on 2022/07/19 13:44:00 UTC
[jira] [Commented] (HBASE-27224) HFile tool statistic sampling produces misleading results
[ https://issues.apache.org/jira/browse/HBASE-27224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17568571#comment-17568571 ]
Bryan Beaudreault commented on HBASE-27224:
-------------------------------------------
I actually wonder if we should use our own MutableSizeHistogram instead, which would additionally give us histogram bucket counts that could show how many rows exceed a given threshold.
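For illustration, here is a rough sketch of the bucket-count idea. This is hand-rolled and hypothetical, not the actual MutableSizeHistogram API; the class name and bucket boundaries below are made up for the example:

    import java.util.concurrent.atomic.LongAdder;

    // Sketch only: fixed power-of-ten size buckets, similar in spirit to
    // a size-range histogram. Boundaries here are illustrative.
    public class SizeBucketCounter {
      private static final long[] BOUNDS =
          { 10, 100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000 };
      // one counter per bucket, plus an overflow bucket for values > last bound
      private final LongAdder[] counts = new LongAdder[BOUNDS.length + 1];

      public SizeBucketCounter() {
        for (int i = 0; i < counts.length; i++) {
          counts[i] = new LongAdder();
        }
      }

      // bucket i holds values in (BOUNDS[i-1], BOUNDS[i]]; last bucket is overflow
      public void update(long size) {
        int i = 0;
        while (i < BOUNDS.length && size > BOUNDS[i]) {
          i++;
        }
        counts[i].increment();
      }

      // count of observed values above the given bucket boundary
      public long countAbove(long boundary) {
        long total = 0;
        for (int i = 0; i <= BOUNDS.length; i++) {
          long lower = (i == 0) ? 0 : BOUNDS[i - 1];
          if (lower >= boundary) {
            total += counts[i].sum();
          }
        }
        return total;
      }
    }

With counts like these, answering "how many rows are > 1mb?" becomes a simple bucket sum rather than hoping the sampled max happened to see the outlier.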
> HFile tool statistic sampling produces misleading results
> ---------------------------------------------------------
>
> Key: HBASE-27224
> URL: https://issues.apache.org/jira/browse/HBASE-27224
> Project: HBase
> Issue Type: Improvement
> Reporter: Bryan Beaudreault
> Priority: Major
>
> The HFile tool uses Codahale metrics to collect statistics about the key/values in an HFile. We recently had a case where the statistics reported a max row size of only 25k. This was confusing, because I was seeing bucket cache allocation failures for blocks as large as 1.5mb.
> Digging in, I was able to find the large row using the "-p" argument (which was, of course, very verbose). Once I found the row, I saw its vlen listed as ~1.5mb, which made much more sense.
> The first thing I notice here is that the default Codahale metrics histogram uses an ExponentiallyDecayingReservoir. That probably makes sense for a long-lived histogram, but the HFile tool runs at a single point in time, so it might be best to use a UniformReservoir instead.
> Secondly, we do not need sampling for min/max at all. Let's supplement the histogram with our own calculation, which is guaranteed to be accurate for the entirety of the file (a sketch of this combination follows below).
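A minimal sketch of that combination, assuming the stock Codahale Histogram and UniformReservoir classes (the KeyValueStats wrapper name and its wiring are hypothetical):

    import com.codahale.metrics.Histogram;
    import com.codahale.metrics.Snapshot;
    import com.codahale.metrics.UniformReservoir;

    // Sketch only: a UniformReservoir-backed histogram for point-in-time
    // sampling, supplemented with exact (unsampled) min/max tracking.
    public class KeyValueStats {
      private final Histogram histogram = new Histogram(new UniformReservoir());
      private long exactMin = Long.MAX_VALUE;
      private long exactMax = Long.MIN_VALUE;

      public void update(long size) {
        histogram.update(size);              // sampled: mean, percentiles, etc.
        exactMin = Math.min(exactMin, size); // exact: never lost to sampling
        exactMax = Math.max(exactMax, size);
      }

      public void print() {
        Snapshot s = histogram.getSnapshot();
        System.out.printf("mean=%.1f p99=%.1f min=%d max=%d%n",
            s.getMean(), s.get99thPercentile(), exactMin, exactMax);
      }
    }

The sampled snapshot still drives the mean and percentiles, while min/max come from plain counters that see every value, so a single 1.5mb outlier can no longer be dropped by the reservoir.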
--
This message was sent by Atlassian Jira
(v8.20.10#820010)