Posted to issues@hbase.apache.org by "Artem Ervits (JIRA)" <ji...@apache.org> on 2019/06/05 20:47:00 UTC
[jira] [Updated] (HBASE-21536) Fix completebulkload usage instructions
[ https://issues.apache.org/jira/browse/HBASE-21536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Artem Ervits updated HBASE-21536:
---------------------------------
Fix Version/s: 2.1.6
> Fix completebulkload usage instructions
> ---------------------------------------
>
> Key: HBASE-21536
> URL: https://issues.apache.org/jira/browse/HBASE-21536
> Project: HBase
> Issue Type: Task
> Components: documentation, mapreduce
> Reporter: Artem Ervits
> Assignee: Artem Ervits
> Priority: Trivial
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.6
>
> Attachments: HBASE-21536.v01.patch, HBASE-21536.v02.patch
>
>
> Usage information upon invoking LoadIncrementalHFiles is misleading and error-prone.
> {code:java}
> usage: completebulkload /path/to/hfileoutputformat-output tablename -loadTable
> -Dcreate.table=no - can be used to avoid creation of table by this tool
> Note: if you set this to 'no', then the target table must already exist in HBase
> -loadTable implies your baseDirectory to store file has a depth of 3; you must have an existing table
> -Dignore.unmatched.families=yes - can be used to ignore unmatched column families{code}
> When the class is invoked via the hbase command, the completebulkload argument is unnecessary; it is only required for the hadoop jar invocation. This is also an attempt to clarify where the <-loadTable> and <-Dargs> arguments go on the command line.
> Furthermore, since LoadIncrementalHFiles was recently moved out of hbase-server.jar into hbase-mapreduce, the ref guide is updated to demonstrate this.
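> To illustrate the distinction above, a sketch of the two invocation styles (the jar file name, paths, and table name here are placeholders, not taken from this issue):
> {code:bash}
> # Invoked via the hbase launcher: the class is resolved directly,
> # so no 'completebulkload' program argument is needed.
> hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles \
>   -Dignore.unmatched.families=yes \
>   /path/to/hfileoutputformat-output mytable
>
> # Invoked via 'hadoop jar': the driver needs the program name
> # 'completebulkload' to select the tool from the jar.
> hadoop jar hbase-mapreduce-VERSION.jar completebulkload \
>   /path/to/hfileoutputformat-output mytable
> {code}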
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)