Posted to issues@hbase.apache.org by "Yu Li (JIRA)" <ji...@apache.org> on 2016/01/26 16:03:39 UTC

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

    [ https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117348#comment-15117348 ] 

Yu Li commented on HBASE-14061:
-------------------------------

After an offline discussion with [~victorunique], I'll take over this task (let me know if you change your mind by any chance, Victor :-))

Adding links to two related HDFS JIRAs, which resolve issues in the HDFS layer in a heterogeneous environment (some machines have SSDs while others don't).

Will refine the patch according to review comments later.

> Support CF-level Storage Policy
> -------------------------------
>
>                 Key: HBASE-14061
>                 URL: https://issues.apache.org/jira/browse/HBASE-14061
>             Project: HBase
>          Issue Type: Sub-task
>          Components: HFile, regionserver
>         Environment: hadoop-2.6.0
>            Reporter: Victor Xu
>            Assignee: Victor Xu
>         Attachments: HBASE-14061-master-v1.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot data, which usually resides in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy only takes effect when a new hfile is created in a directory that has the policy configured, so I had to create sub-directories (one per column family) under the region's .tmp directory and set the storage policy on them.
> Besides, I had to upgrade the Hadoop version to 2.6.0 because dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed this API to finish my unit test.
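
For reference, a minimal sketch of the directory-level policy idea described above, assuming Hadoop 2.6.0+ where DistributedFileSystem#setStoragePolicy(Path, String) is available; the path layout and the ONE_SSD policy name are illustrative only, not taken from the attached patch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CfStoragePolicySketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Hypothetical per-CF sub-directory under a region's .tmp directory.
    Path cfTmpDir = new Path("/hbase/data/default/TABLE_NAME/region/.tmp/CF_NAME");
    fs.mkdirs(cfTmpDir);

    // setStoragePolicy lives on DistributedFileSystem in Hadoop 2.6.0,
    // so it only applies when HBase is actually backed by HDFS.
    if (fs instanceof DistributedFileSystem) {
      ((DistributedFileSystem) fs).setStoragePolicy(cfTmpDir, "ONE_SSD");
    }
    // New hfiles written into cfTmpDir inherit the directory's policy;
    // blocks of pre-existing files only move once the HDFS Mover runs.
  }
}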
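
And a hedged sketch of the reflection approach implied above (setting the policy reflectively keeps the code compiling against pre-2.6.0 Hadoop); this is a generic compatibility shim, not the code from the attached patch:

import java.lang.reflect.Method;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ReflectiveStoragePolicy {
  private ReflectiveStoragePolicy() {}

  // Best effort: silently skip if the running Hadoop has no setStoragePolicy.
  public static void trySetStoragePolicy(FileSystem fs, Path dir, String policy) {
    try {
      Method m = fs.getClass().getMethod("setStoragePolicy", Path.class, String.class);
      m.invoke(fs, dir, policy);
    } catch (NoSuchMethodException e) {
      // Hadoop < 2.6.0: storage policies are not supported, nothing to do.
    } catch (Exception e) {
      // Invocation failed (e.g. unknown policy name); a real caller would log this.
    }
  }
}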



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)