Posted to issues@hbase.apache.org by "Mikhail Zvagelsky (JIRA)" <ji...@apache.org> on 2016/11/21 11:04:58 UTC

[jira] [Commented] (HBASE-16935) deleteColumn/modifyTable don't delete all family's StoreFile from file system

    [ https://issues.apache.org/jira/browse/HBASE-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15683218#comment-15683218 ] 

Mikhail Zvagelsky commented on HBASE-16935:
-------------------------------------------

 Dear Matteo, thank you very much for the explanation!
Indeed, if we flush the memstore before deleting the column family:
{code:|borderStyle=solid}
admin.flush(tableName);
admin.deleteColumn(tableName, Bytes.toBytes("cf2"));
{code}
the family's folder disappears from the file system.
Maybe the deleteColumn() method could take an additional boolean parameter,
e.g. "deleteFromMemory", indicating whether the memstore should be flushed
before the family's folder is removed?
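For illustration, here is a minimal sketch of what such an overload might look like; the "deleteFromMemory" flag and the helper method below are hypothetical, not an existing Admin API:
{code:|borderStyle=solid}
// Hypothetical sketch of the suggested behaviour: optionally flush the
// table's memstores before dropping the family, so that its folder can be
// removed from the file system as well.
void deleteColumnFamily(Admin admin, TableName tableName, byte[] family,
                        boolean deleteFromMemory) throws IOException {
    if (deleteFromMemory) {
        admin.flush(tableName); // write out in-memory data before the family is dropped
    }
    admin.deleteColumn(tableName, family);
}
{code}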
Thank you again!

> deleteColumn/modifyTable don't delete all family's StoreFile from file system
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-16935
>                 URL: https://issues.apache.org/jira/browse/HBASE-16935
>             Project: HBase
>          Issue Type: New Feature
>          Components: Admin
>    Affects Versions: 1.2.3
>            Reporter: Mikhail Zvagelsky
>         Attachments: Selection_008.png
>
>
> The method deleteColumn(TableName tableName, byte[] columnName) of the class org.apache.hadoop.hbase.client.Admin should delete the specified column family from the specified table. (Despite its name, the method removes the family, not a column - see the [issue| https://issues.apache.org/jira/browse/HBASE-1989].)
> This method changes the table's schema, but it doesn't delete the column family's StoreFile from the file system. To be precise, I run this code:
> {code:|borderStyle=solid}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.*;
> import org.apache.hadoop.hbase.util.Bytes;
> public class ToHBaseIssueTracker {
>     public static void main(String[] args) throws IOException {
>         TableName tableName = TableName.valueOf("test_table");
>         HTableDescriptor desc = new HTableDescriptor(tableName);
>         desc.addFamily(new HColumnDescriptor("cf1"));
>         desc.addFamily(new HColumnDescriptor("cf2"));
>         Configuration conf = HBaseConfiguration.create();
>         Connection connection = ConnectionFactory.createConnection(conf);
>         Admin admin = connection.getAdmin();
>         admin.createTable(desc);
>         Table table = connection.getTable(tableName); // the HTable(conf, name) constructor is deprecated
>         for (int i = 0; i < 4; i++) {
>             Put put = new Put(Bytes.toBytes(i)); // Use i as row key.
>             put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("a"), Bytes.toBytes("value"));
>             put.addColumn(Bytes.toBytes("cf2"), Bytes.toBytes("a"), Bytes.toBytes("value"));
>             table.put(put);
>         }
>         table.close();
>         // Drop the cf2 family without flushing the memstore first, then major-compact.
>         admin.deleteColumn(tableName, Bytes.toBytes("cf2"));
>         admin.majorCompact(tableName);
>         admin.close();
>         connection.close();
>     }
> }
> {code}
> Then I see that the StoreFile for the "cf2" family persists in the file system.
> I observe this effect in a standalone HBase installation and in pseudo-distributed mode.
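
For reference, a minimal sketch of how the leftover family directory can be checked from the client side; this is illustrative only - it assumes the default HBase 1.x on-disk layout under hbase.rootdir, that hbase.rootdir is set in the client configuration, and the class name is made up:
{code:|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
public class CheckLeftoverFamilyDir {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // Default layout: <hbase.rootdir>/data/<namespace>/<table>/<region>/<family>
        Path tableDir = new Path(conf.get("hbase.rootdir"), "data/default/test_table");
        FileSystem fs = tableDir.getFileSystem(conf);
        // Walk the table's region directories and report any remaining cf2 StoreFiles.
        for (FileStatus region : fs.listStatus(tableDir)) {
            Path cf2Dir = new Path(region.getPath(), "cf2");
            if (fs.exists(cf2Dir)) {
                for (FileStatus storeFile : fs.listStatus(cf2Dir)) {
                    System.out.println("Leftover StoreFile: " + storeFile.getPath());
                }
            }
        }
    }
}
{code}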



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)