Posted to issues@hbase.apache.org by "Ashish Singhi (JIRA)" <ji...@apache.org> on 2017/01/05 13:40:58 UTC

[jira] [Comment Edited] (HBASE-14061) Support CF-level Storage Policy

    [ https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801374#comment-15801374 ] 

Ashish Singhi edited comment on HBASE-14061 at 1/5/17 1:40 PM:
---------------------------------------------------------------

{code}
  /**
   * Return the encryption algorithm in use by this family
   * <p/>
   * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy, see
   * org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
   */
  public String getStoragePolicy() {
    return getValue(STORAGE_POLICY);
  }

  /**
   * Set the encryption algorithm for use with this family
   * @param policy
   */
  public HColumnDescriptor setStoragePolicy(String policy) {
    setValue(STORAGE_POLICY, policy);
    return this;
  }
{code}
That javadoc is copied from HCD#getEncryptionType and #setEncryptionType, it needs to be corrected.
Otherwise LGTM.
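For reference, one possible corrected wording for the getter/setter javadoc (only a suggestion, the exact text is up to the patch author):
{code}
  /**
   * Return the storage policy in use by this family
   * <p/>
   * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy, see
   * org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
   */
  public String getStoragePolicy() {
    return getValue(STORAGE_POLICY);
  }

  /**
   * Set the storage policy for use with this family
   * @param policy the storage policy name, e.g. {@code ALL_SSD}
   */
  public HColumnDescriptor setStoragePolicy(String policy) {
    setValue(STORAGE_POLICY, policy);
    return this;
  }
{code}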


was (Author: ashish singhi):
{code}
  /**
   * Return the encryption algorithm in use by this family
   * <p/>
   * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy, see
   * org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
   */
  public String getStoragePolicy() {
    return getValue(STORAGE_POLICY);
  }

  /**
   * Set the encryption algorithm for use with this family
   * @param policy
   */
  public HColumnDescriptor setStoragePolicy(String policy) {
    setValue(STORAGE_POLICY, policy);
    return this;
  }
{code}
That javadoc is for HCD#getEncryptionType, it needs to be corrected.
Otherwise LGTM.

> Support CF-level Storage Policy
> -------------------------------
>
>                 Key: HBASE-14061
>                 URL: https://issues.apache.org/jira/browse/HBASE-14061
>             Project: HBase
>          Issue Type: Sub-task
>          Components: HFile, regionserver
>         Environment: hadoop-2.6.0
>            Reporter: Victor Xu
>            Assignee: Yu Li
>         Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, HBASE-14061.v3.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot data, which usually resides in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a configured directory, so I had to make sub-directories (one for each cf) in the region's .tmp directory and set the storage policy on them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because dfs.getStoragePolicy cannot be easily invoked via reflection, and I needed this API to finish my unit test.
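> For illustration only (not the actual patch code), here is a minimal sketch of how a policy could be applied to a per-cf directory on Hadoop 2.6.0 via DistributedFileSystem#setStoragePolicy; the class, method and directory names below are made up:
> {code}
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class CfStoragePolicyExample {
>   /**
>    * Create the per-cf temp directory and tag it with the given HDFS storage
>    * policy (e.g. "ALL_SSD", "ONE_SSD", "HOT"), so that new hfiles written
>    * under it pick up the policy.
>    */
>   static void setPolicyOnCfTmpDir(FileSystem fs, Path cfTmpDir, String policyName)
>       throws IOException {
>     fs.mkdirs(cfTmpDir);
>     if (fs instanceof DistributedFileSystem) {
>       // setStoragePolicy(Path, String) is available on HDFS since 2.6.0;
>       // skipped here if the underlying filesystem is not HDFS.
>       ((DistributedFileSystem) fs).setStoragePolicy(cfTmpDir, policyName);
>     }
>   }
> }
> {code}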



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)