Posted to common-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2015/07/06 22:12:04 UTC
[jira] [Commented] (HADOOP-10979) Auto-entries in hadoop_usage
[ https://issues.apache.org/jira/browse/HADOOP-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14615562#comment-14615562 ]
Allen Wittenauer commented on HADOOP-10979:
-------------------------------------------
The new output:
hadoop:
{code}
aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:
    --config confdir
    --daemon (start|stop|status)
    --debug
    --hostnames list[,of,host,names]
    --hosts filename
    --loglevel loglevel
    --slaves

  SUBCOMMAND is one of:
    archive       create a Hadoop archive
    checknative   check native Hadoop and compression libraries availability
    classpath     prints the class path needed to get the Hadoop jar and the
                  required libraries
    conftest      validate configuration XML files
    credential    interact with credential providers
    daemonlog     get/set the log level for each daemon
    distch        distributed metadata changer
    distcp        copy file or directories recursively
    fs            run a generic filesystem user client
    jar <jar>     run a jar file. NOTE: please use "yarn jar" to launch YARN
                  applications, not this command.
    jnipath       prints the java.library.path
    kerbname      show auth_to_local principal conversion
    key           manage keys via the KeyProvider
    trace         view and modify Hadoop tracing settings
    version       print the version

Most subcommands print help when invoked w/o parameters or with -h.
{code}
hdfs:
{code}
aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hdfs
Usage: hdfs [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
  OPTIONS is none or any of:
    --config confdir
    --daemon (start|stop|status)
    --debug
    --hostnames list[,of,host,names]
    --hosts filename
    --loglevel loglevel
    --slaves

  SUBCOMMAND is one of:
    balancer             run a cluster balancing utility
    cacheadmin           configure the HDFS cache
    classpath            prints the class path needed to get the hadoop jar and
                         the required libraries
    crypto               configure HDFS encryption zones
    datanode             run a DFS datanode
    debug                run a Debug Admin to execute HDFS debug commands
    dfs                  run a filesystem command on the file system
    dfsadmin             run a DFS admin client
    fetchdt              fetch a delegation token from the NameNode
    fsck                 run a DFS filesystem checking utility
    getconf              get config values from configuration
    groups               get the groups which users belong to
    haadmin              run a DFS HA admin client
    jmxget               get JMX exported values from NameNode or DataNode.
    journalnode          run the DFS journalnode
    lsSnapshottableDir   list all snapshottable dirs owned by the current user
    mover                run a utility to move block replicas across storage types
    namenode             run the DFS namenode
    nfs3                 run an NFS version 3 gateway
    oev                  apply the offline edits viewer to an edits file
    oiv                  apply the offline fsimage viewer to an fsimage
    oiv_legacy           apply the offline fsimage viewer to a legacy fsimage
    portmap              run a portmap service
    secondarynamenode    run the DFS secondary namenode
    snapshotDiff         diff two snapshots of a directory or diff the current
                         directory contents with a snapshot
    storagepolicies      list/get/set block storage policies
    version              print the version
    zkfc                 run the ZK Failover Controller daemon

Most subcommands print help when invoked w/o parameters or with -h.
{code}
mapred:
{code}
aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/mapred
Usage: mapred [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
  OPTIONS is none or any of:
    --config confdir
    --daemon (start|stop|status)
    --debug
    --hostnames list[,of,host,names]
    --hosts filename
    --loglevel loglevel
    --slaves

  SUBCOMMAND is one of:
    archive         create a hadoop archive
    classpath       prints the class path needed for running mapreduce subcommands
    distcp          copy file or directories recursively
    historyserver   run job history servers as a standalone daemon
    hsadmin         job history server admin interface
    job             manipulate MapReduce jobs
    pipes           run a Pipes job
    queue           get information regarding JobQueues
    sampler         sampler
    version         print the version

Most subcommands print help when invoked w/o parameters or with -h.
{code}
yarn:
{code}
aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/yarn
Usage: yarn [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    yarn [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:
    --config confdir
    --daemon (start|stop|status)
    --debug
    --hostnames list[,of,host,names]
    --hosts filename
    --loglevel loglevel
    --slaves

  SUBCOMMAND is one of:
    application          prints application(s) report/kill application
    applicationattempt   prints applicationattempt(s) report
    classpath            prints the class path needed to get the hadoop jar and
                         the required libraries
    cluster              prints cluster information
    container            prints container(s) report
    daemonlog            get/set the log level for each daemon
    jar <jar>            run a jar file
    logs                 dump container logs
    node                 prints node report(s)
    nodemanager          run a nodemanager on each slave
    proxyserver          run the web app proxy server
    queue                prints queue information
    resourcemanager      run the ResourceManager
    rmadmin              admin tools
    scmadmin             SharedCacheManager admin tools
    sharedcachemanager   run the SharedCacheManager daemon
    timelineserver       run the timeline server
    top                  view cluster information
    version              print the version

Most subcommands print help when invoked w/o parameters or with -h.
{code}
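The tables above can be produced from a registry rather than a hand-maintained here-doc, which is the auto-population idea the issue describes. A minimal bash sketch of that approach (the function and variable names here are illustrative, not necessarily what the patch uses):
{code}
#!/usr/bin/env bash
# Sketch: each subcommand registers a name and description once, and the
# usage printer walks the registry, so the help text stays in sync as
# subcommands are added or removed.

declare -a HADOOP_SUBCMD_USAGE=()

# hadoop_add_subcommand <name> <description> -- hypothetical helper
hadoop_add_subcommand() {
  HADOOP_SUBCMD_USAGE+=("$1|$2")
}

# hadoop_generate_usage -- print the registered entries, sorted and aligned
hadoop_generate_usage() {
  local entry
  echo "SUBCOMMAND is one of:"
  for entry in "${HADOOP_SUBCMD_USAGE[@]}"; do
    printf "  %-14s %s\n" "${entry%%|*}" "${entry#*|}"
  done | sort
}

hadoop_add_subcommand "version" "print the version"
hadoop_add_subcommand "fs" "run a generic filesystem user client"
hadoop_add_subcommand "archive" "create a Hadoop archive"
hadoop_generate_usage
{code}
With this shape, a new subcommand only needs one registration call and it shows up in the usage output automatically, alphabetized and column-aligned.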
> Auto-entries in hadoop_usage
> ----------------------------
>
> Key: HADOOP-10979
> URL: https://issues.apache.org/jira/browse/HADOOP-10979
> Project: Hadoop Common
> Issue Type: Improvement
> Components: scripts
> Reporter: Allen Wittenauer
> Priority: Minor
> Labels: scripts
> Attachments: HADOOP-10978.00.patch
>
>
> Auto-populating some entries in the hadoop_usage output would make it easier to add common options. This is similar to what happens in FsShell and other parts of the Java code.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)