Posted to dev@hbase.apache.org by "Lars Francke (JIRA)" <ji...@apache.org> on 2009/12/03 02:09:20 UTC

[jira] Commented: (HBASE-800) support HTD and HCD get/set attribute in shell, Thrift, and REST interfaces

    [ https://issues.apache.org/jira/browse/HBASE-800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12785107#action_12785107 ] 

Lars Francke commented on HBASE-800:
------------------------------------

The new Thrift API developed in HBASE-1744 will give Thrift modifyTable, getTableDescriptor, and createTable taking a TableDescriptor. That would solve part of this ticket.
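For illustration, the table-schema methods mentioned above might be declared along these lines in Thrift IDL (a hypothetical sketch only; the actual service name, struct definitions such as TableDescriptor/IOError/Text, and signatures are whatever the HBASE-1744 patch defines):

```
// Sketch: TableDescriptor, Text, and IOError are assumed to be
// defined elsewhere in the HBASE-1744 IDL.
service THBaseAdmin {
  void createTable(1: TableDescriptor desc) throws (1: IOError io),
  TableDescriptor getTableDescriptor(1: Text tableName) throws (1: IOError io),
  void modifyTable(1: Text tableName, 2: TableDescriptor desc) throws (1: IOError io)
}
```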

> support HTD and HCD get/set attribute in shell, Thrift, and REST interfaces
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-800
>                 URL: https://issues.apache.org/jira/browse/HBASE-800
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: client, rest, thrift
>    Affects Versions: 0.2.1
>            Reporter: Andrew Purtell
>            Priority: Minor
>
> From Billy Pearson on hbase-users@
> Hey Andrew
> Do we have plans to include setMaxFileSize for the shell, Thrift, and REST?
> So non-Java users can change this as needed without having to learn Java.
> Billy
> "Andrew Purtell" <ap...@yahoo.com> wrote in message news:189371.9860.qm@web65516.mail.ac4.yahoo.com...
> > Hello David,
> >
> > Current trunk (upcoming 0.2.0) has support for per-table metadata. See
> > https://issues.apache.org/jira/browse/HBASE-42 and
> > https://issues.apache.org/jira/browse/HBASE-62.
> >
> > So maybe you can set the split threshold quite low for the table in
> > question?
> >
> > The default is 256MB (268435456), set globally for all tables in the HBase
> > configuration as "hbase.hregion.max.filesize". However, it's reasonable to
> > set it as low as the DFS blocksize. The guidance for a typical HBase
> > installation is to set the DFS blocksize to 8MB (8388608), instead of the
> > default 64MB.
> >
> > At create time:
> >
> >  HTableDescriptor htd = new HTableDescriptor("foo");
> >  htd.setMaxFileSize(8388608);
> >  ...
> >  HBaseAdmin admin = new HBaseAdmin(hconf);
> >  admin.createTable(htd);
> >
> > If the table already exists:
> >
> >  HTable table = new HTable(hconf, "foo");
> >  HBaseAdmin admin = new HBaseAdmin(hconf);
> >  admin.disableTable("foo");
> >  // make a read-write descriptor
> >  HTableDescriptor htd =
> >    new HTableDescriptor(table.getTableDescriptor());
> >  htd.setMaxFileSize(8388608);
> >  admin.modifyTableMeta("foo", htd);
> >  admin.enableTable("foo");
> >
> > Hope this helps,
> >
> >   - Andy
> >
> >> From: David Alves
> >> <dr...@criticalsoftware.com>
> >> Subject: Region Splits
> >> To: "hbase-user@hadoop.apache.org"
> >> <hb...@hadoop.apache.org>
> >> Date: Thursday, July 31, 2008, 6:06 AM
> > [...]
> >> I use hbase (amongst other things) to crawl some repos of information
> >> and until now I've been using the Nutch segment generation paradigm.
> >> I would very much like to skip the segment generation step, using
> >> hbase as source and sink directly, but in order to do that I would
> >> need to either allow more than one split to be generated for a
> >> single region or make the regions in this particular table split
> >> with far fewer entries than other tables.
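For context, the shell side of what this ticket asks for would amount to something like the following session (a hypothetical sketch of the desired syntax, mirroring the Java example above; not a description of any shipped release):

```
hbase> disable 'foo'
hbase> alter 'foo', {NAME => 'foo', MAX_FILESIZE => '8388608'}
hbase> enable 'foo'
```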

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.