Posted to issues@hbase.apache.org by "Artem Ervits (JIRA)" <ji...@apache.org> on 2018/11/30 20:18:00 UTC

[jira] [Updated] (HBASE-21536) Fix completebulkload usage instructions

     [ https://issues.apache.org/jira/browse/HBASE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Artem Ervits updated HBASE-21536:
---------------------------------
    Description: 
Usage information upon invoking LoadIncrementalHFiles is misleading and error-prone.
{code:java}
usage: completebulkload /path/to/hfileoutputformat-output tablename -loadTable
-Dcreate.table=no - can be used to avoid creation of table by this tool
Note: if you set this to 'no', then the target table must already exist in HBase
-loadTable implies your baseDirectory to store file has a depth of 3 ,you must have an existing table
-Dignore.unmatched.families=yes - can be used to ignore unmatched column families{code}
When the class is invoked via the hbase command, the completebulkload argument is unnecessary; it is only required for the hadoop jar invocation. This is also an attempt to clarify where the <-loadTable> and <-Dargs> arguments go on the command line.
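
A hedged sketch of the hbase-command form being described (the fully-qualified class name, paths, and table name below are placeholders/assumptions based on the HBase 2.x layout, not the final wording of the fix):
{code}
# Via the hbase launcher the class is picked up from the HBase classpath,
# so no leading "completebulkload" program name is needed.
# Generic -D properties go before the positional arguments; -loadTable
# trails the table name, as in the usage text above.
bin/hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles \
  -Dcreate.table=no -Dignore.unmatched.families=yes \
  /path/to/hfileoutputformat-output mytable -loadTable
{code}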

Furthermore, since LoadIncrementalHFiles was recently moved out of hbase-server.jar into hbase-mapreduce, the ref guide should be updated to demonstrate this as well.
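
For the ref guide side, a sketch of the hadoop jar form pointing at the hbase-mapreduce artifact (the jar location/version placeholder and the classpath wiring are assumptions for illustration):
{code}
# The driver in the hbase-mapreduce jar resolves the "completebulkload"
# program name to LoadIncrementalHFiles; HBase classes must be on the
# Hadoop classpath for the tool to run.
HADOOP_CLASSPATH=$(bin/hbase classpath) \
  hadoop jar lib/hbase-mapreduce-<VERSION>.jar completebulkload \
  -Dcreate.table=no /path/to/hfileoutputformat-output mytable
{code}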

  was:
Usage information upon invoking LoadIncrementalHFiles is misleading and error-prone.
{code:java}
usage: completebulkload /path/to/hfileoutputformat-output tablename -loadTable
-Dcreate.table=no - can be used to avoid creation of table by this tool
Note: if you set this to 'no', then the target table must already exist in HBase
-loadTable implies your baseDirectory to store file has a depth of 3 ,you must have an existing table
-Dignore.unmatched.families=yes - can be used to ignore unmatched column families{code}
When the class is invoked via the hbase command, the completebulkload argument is unnecessary; it is only required for the hadoop jar invocation. This is also an attempt to clarify where the <-loadTable> and <-Dargs> arguments go on the command line.


> Fix completebulkload usage instructions
> ---------------------------------------
>
>                 Key: HBASE-21536
>                 URL: https://issues.apache.org/jira/browse/HBASE-21536
>             Project: HBase
>          Issue Type: Task
>          Components: documentation, mapreduce
>            Reporter: Artem Ervits
>            Assignee: Artem Ervits
>            Priority: Trivial
>         Attachments: HBASE-21536.v01.patch
>
>
> Usage information upon invoking LoadIncrementalHFiles is misleading and error-prone.
> {code:java}
> usage: completebulkload /path/to/hfileoutputformat-output tablename -loadTable
> -Dcreate.table=no - can be used to avoid creation of table by this tool
> Note: if you set this to 'no', then the target table must already exist in HBase
> -loadTable implies your baseDirectory to store file has a depth of 3 ,you must have an existing table
> -Dignore.unmatched.families=yes - can be used to ignore unmatched column families{code}
> When the class is invoked via the hbase command, the completebulkload argument is unnecessary; it is only required for the hadoop jar invocation. This is also an attempt to clarify where the <-loadTable> and <-Dargs> arguments go on the command line.
> Furthermore, since LoadIncrementalHFiles was recently moved out of hbase-server.jar into hbase-mapreduce, the ref guide should be updated to demonstrate this as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)