Posted to issues@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2018/03/02 01:11:00 UTC

[jira] [Updated] (HBASE-11900) Optimization for incremental load reducer

     [ https://issues.apache.org/jira/browse/HBASE-11900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-11900:
--------------------------
    Fix Version/s:     (was: 2.0.0)

> Optimization for incremental load reducer
> -----------------------------------------
>
>                 Key: HBASE-11900
>                 URL: https://issues.apache.org/jira/browse/HBASE-11900
>             Project: HBase
>          Issue Type: Improvement
>          Components: HFile, mapreduce
>    Affects Versions: 0.98.6
>            Reporter: Yi Deng
>            Priority: Minor
>
> In the current implementation, the reducer key configured by HFileOutputFormat.configureIncrementalLoad is the row, so the reducer must sort all of a row's key values in memory before writing them to disk. A row with a huge number of columns/versions can therefore cause an OOM.
> A better way is:
> Use the KeyValue itself as the key, with a NullWritable as the value. Have the partitioner partition KeyValues by their row part only, and set a sort comparator that orders KeyValues with KeyValue.COMPARATOR.
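The idea in the quoted proposal can be illustrated with a small, self-contained sketch (plain Java, no HBase dependencies; the Cell record and CELL_COMPARATOR below are hypothetical stand-ins for HBase's KeyValue and KeyValue.COMPARATOR, not the real APIs): if each cell, rather than each row, is the shuffle key, the framework's sort already delivers cells in write order, while partitioning on the row part alone still keeps a row's cells on one reducer.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CellSortSketch {
    // Hypothetical stand-in for HBase's KeyValue: (row, qualifier, timestamp).
    record Cell(String row, String qualifier, long ts) {}

    // Mirrors the spirit of KeyValue.COMPARATOR:
    // row ascending, qualifier ascending, timestamp descending (newest first).
    static final Comparator<Cell> CELL_COMPARATOR =
        Comparator.comparing(Cell::row)
                  .thenComparing(Cell::qualifier)
                  .thenComparing(Comparator.comparingLong(Cell::ts).reversed());

    // Partition on the row part only, so every cell of a row reaches
    // the same reducer even though the shuffle key is the whole cell.
    static int partition(Cell c, int numReducers) {
        return (c.row().hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    public static void main(String[] args) {
        List<Cell> cells = new ArrayList<>(List.of(
            new Cell("row2", "colA", 1L),
            new Cell("row1", "colB", 5L),
            new Cell("row1", "colB", 9L),
            new Cell("row1", "colA", 3L)));

        // With cells as keys, this sort is done by the MapReduce shuffle;
        // the reducer can then stream cells straight to the HFile writer
        // instead of buffering and sorting a whole row in memory.
        cells.sort(CELL_COMPARATOR);
        System.out.println(cells);

        // Cells of the same row still land in the same partition.
        System.out.println(
            partition(new Cell("row1", "colA", 3L), 4)
                == partition(new Cell("row1", "colB", 9L), 4));
    }
}
```

In a real job this would correspond to setting the map output key class to KeyValue, the value class to NullWritable, a row-only Partitioner, and KeyValue.COMPARATOR as the sort comparator, so the per-row in-memory sort disappears.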



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)