Posted to common-commits@hadoop.apache.org by aw...@apache.org on 2015/02/10 22:40:10 UTC

[6/7] hadoop git commit: HADOOP-11495. Convert site documentation from apt to markdown (Masatake Iwasaki via aw)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
deleted file mode 100644
index c4f3b1e..0000000
--- a/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
+++ /dev/null
@@ -1,552 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop ${project.version}
-  ---
-  ---
-  ${maven.build.timestamp}
-  
-Deprecated Properties
-
-  The following table lists the configuration property names that are
-  deprecated in this version of Hadoop, and their replacements.
-
-*-------------------------------+-----------------------+
-|| <<Deprecated property name>> || <<New property name>>|
-*-------------------------------+-----------------------+
-|create.empty.dir.if.nonexist | mapreduce.jobcontrol.createdir.ifnotexist
-*---+---+
-|dfs.access.time.precision | dfs.namenode.accesstime.precision
-*---+---+
-|dfs.backup.address | dfs.namenode.backup.address
-*---+---+
-|dfs.backup.http.address | dfs.namenode.backup.http-address
-*---+---+
-|dfs.balance.bandwidthPerSec | dfs.datanode.balance.bandwidthPerSec
-*---+---+
-|dfs.block.size | dfs.blocksize
-*---+---+
-|dfs.data.dir | dfs.datanode.data.dir
-*---+---+
-|dfs.datanode.max.xcievers | dfs.datanode.max.transfer.threads
-*---+---+
-|dfs.df.interval | fs.df.interval
-*---+---+
-|dfs.federation.nameservice.id | dfs.nameservice.id
-*---+---+
-|dfs.federation.nameservices | dfs.nameservices
-*---+---+
-|dfs.http.address | dfs.namenode.http-address
-*---+---+
-|dfs.https.address | dfs.namenode.https-address
-*---+---+
-|dfs.https.client.keystore.resource | dfs.client.https.keystore.resource
-*---+---+
-|dfs.https.need.client.auth | dfs.client.https.need-auth
-*---+---+
-|dfs.max.objects | dfs.namenode.max.objects
-*---+---+
-|dfs.max-repl-streams | dfs.namenode.replication.max-streams
-*---+---+
-|dfs.name.dir | dfs.namenode.name.dir
-*---+---+
-|dfs.name.dir.restore | dfs.namenode.name.dir.restore
-*---+---+
-|dfs.name.edits.dir | dfs.namenode.edits.dir
-*---+---+
-|dfs.permissions | dfs.permissions.enabled
-*---+---+
-|dfs.permissions.supergroup | dfs.permissions.superusergroup
-*---+---+
-|dfs.read.prefetch.size | dfs.client.read.prefetch.size
-*---+---+
-|dfs.replication.considerLoad | dfs.namenode.replication.considerLoad
-*---+---+
-|dfs.replication.interval | dfs.namenode.replication.interval
-*---+---+
-|dfs.replication.min | dfs.namenode.replication.min
-*---+---+
-|dfs.replication.pending.timeout.sec | dfs.namenode.replication.pending.timeout-sec
-*---+---+
-|dfs.safemode.extension | dfs.namenode.safemode.extension
-*---+---+
-|dfs.safemode.threshold.pct | dfs.namenode.safemode.threshold-pct
-*---+---+
-|dfs.secondary.http.address | dfs.namenode.secondary.http-address
-*---+---+
-|dfs.socket.timeout | dfs.client.socket-timeout
-*---+---+
-|dfs.umaskmode | fs.permissions.umask-mode
-*---+---+
-|dfs.write.packet.size | dfs.client-write-packet-size
-*---+---+
-|fs.checkpoint.dir | dfs.namenode.checkpoint.dir
-*---+---+
-|fs.checkpoint.edits.dir | dfs.namenode.checkpoint.edits.dir
-*---+---+
-|fs.checkpoint.period | dfs.namenode.checkpoint.period
-*---+---+
-|fs.default.name | fs.defaultFS
-*---+---+
-|hadoop.configured.node.mapping | net.topology.configured.node.mapping
-*---+---+
-|hadoop.job.history.location | mapreduce.jobtracker.jobhistory.location
-*---+---+
-|hadoop.native.lib | io.native.lib.available
-*---+---+
-|hadoop.net.static.resolutions | mapreduce.tasktracker.net.static.resolutions
-*---+---+
-|hadoop.pipes.command-file.keep | mapreduce.pipes.commandfile.preserve
-*---+---+
-|hadoop.pipes.executable.interpretor | mapreduce.pipes.executable.interpretor
-*---+---+
-|hadoop.pipes.executable | mapreduce.pipes.executable
-*---+---+
-|hadoop.pipes.java.mapper | mapreduce.pipes.isjavamapper
-*---+---+
-|hadoop.pipes.java.recordreader | mapreduce.pipes.isjavarecordreader
-*---+---+
-|hadoop.pipes.java.recordwriter | mapreduce.pipes.isjavarecordwriter
-*---+---+
-|hadoop.pipes.java.reducer | mapreduce.pipes.isjavareducer
-*---+---+
-|hadoop.pipes.partitioner | mapreduce.pipes.partitioner
-*---+---+
-|heartbeat.recheck.interval | dfs.namenode.heartbeat.recheck-interval
-*---+---+
-|io.bytes.per.checksum | dfs.bytes-per-checksum
-*---+---+
-|io.sort.factor | mapreduce.task.io.sort.factor
-*---+---+
-|io.sort.mb | mapreduce.task.io.sort.mb
-*---+---+
-|io.sort.spill.percent | mapreduce.map.sort.spill.percent
-*---+---+
-|jobclient.completion.poll.interval | mapreduce.client.completion.pollinterval
-*---+---+
-|jobclient.output.filter | mapreduce.client.output.filter
-*---+---+
-|jobclient.progress.monitor.poll.interval | mapreduce.client.progressmonitor.pollinterval
-*---+---+
-|job.end.notification.url | mapreduce.job.end-notification.url
-*---+---+
-|job.end.retry.attempts | mapreduce.job.end-notification.retry.attempts
-*---+---+
-|job.end.retry.interval | mapreduce.job.end-notification.retry.interval
-*---+---+
-|job.local.dir | mapreduce.job.local.dir
-*---+---+
-|keep.failed.task.files | mapreduce.task.files.preserve.failedtasks
-*---+---+
-|keep.task.files.pattern | mapreduce.task.files.preserve.filepattern
-*---+---+
-|key.value.separator.in.input.line | mapreduce.input.keyvaluelinerecordreader.key.value.separator
-*---+---+
-|local.cache.size | mapreduce.tasktracker.cache.local.size
-*---+---+
-|map.input.file | mapreduce.map.input.file
-*---+---+
-|map.input.length | mapreduce.map.input.length
-*---+---+
-|map.input.start | mapreduce.map.input.start
-*---+---+
-|map.output.key.field.separator | mapreduce.map.output.key.field.separator
-*---+---+
-|map.output.key.value.fields.spec | mapreduce.fieldsel.map.output.key.value.fields.spec
-*---+---+
-|mapred.acls.enabled | mapreduce.cluster.acls.enabled
-*---+---+
-|mapred.binary.partitioner.left.offset | mapreduce.partition.binarypartitioner.left.offset
-*---+---+
-|mapred.binary.partitioner.right.offset | mapreduce.partition.binarypartitioner.right.offset
-*---+---+
-|mapred.cache.archives | mapreduce.job.cache.archives
-*---+---+
-|mapred.cache.archives.timestamps | mapreduce.job.cache.archives.timestamps
-*---+---+
-|mapred.cache.files | mapreduce.job.cache.files
-*---+---+
-|mapred.cache.files.timestamps | mapreduce.job.cache.files.timestamps
-*---+---+
-|mapred.cache.localArchives | mapreduce.job.cache.local.archives
-*---+---+
-|mapred.cache.localFiles | mapreduce.job.cache.local.files
-*---+---+
-|mapred.child.tmp | mapreduce.task.tmp.dir
-*---+---+
-|mapred.cluster.average.blacklist.threshold | mapreduce.jobtracker.blacklist.average.threshold
-*---+---+
-|mapred.cluster.map.memory.mb | mapreduce.cluster.mapmemory.mb
-*---+---+
-|mapred.cluster.max.map.memory.mb | mapreduce.jobtracker.maxmapmemory.mb
-*---+---+
-|mapred.cluster.max.reduce.memory.mb | mapreduce.jobtracker.maxreducememory.mb
-*---+---+
-|mapred.cluster.reduce.memory.mb | mapreduce.cluster.reducememory.mb
-*---+---+
-|mapred.committer.job.setup.cleanup.needed | mapreduce.job.committer.setup.cleanup.needed
-*---+---+
-|mapred.compress.map.output | mapreduce.map.output.compress
-*---+---+
-|mapred.data.field.separator | mapreduce.fieldsel.data.field.separator
-*---+---+
-|mapred.debug.out.lines | mapreduce.task.debugout.lines
-*---+---+
-|mapred.healthChecker.interval | mapreduce.tasktracker.healthchecker.interval
-*---+---+
-|mapred.healthChecker.script.args | mapreduce.tasktracker.healthchecker.script.args
-*---+---+
-|mapred.healthChecker.script.path | mapreduce.tasktracker.healthchecker.script.path
-*---+---+
-|mapred.healthChecker.script.timeout | mapreduce.tasktracker.healthchecker.script.timeout
-*---+---+
-|mapred.heartbeats.in.second | mapreduce.jobtracker.heartbeats.in.second
-*---+---+
-|mapred.hosts.exclude | mapreduce.jobtracker.hosts.exclude.filename
-*---+---+
-|mapred.hosts | mapreduce.jobtracker.hosts.filename
-*---+---+
-|mapred.inmem.merge.threshold | mapreduce.reduce.merge.inmem.threshold
-*---+---+
-|mapred.input.dir.formats | mapreduce.input.multipleinputs.dir.formats
-*---+---+
-|mapred.input.dir.mappers | mapreduce.input.multipleinputs.dir.mappers
-*---+---+
-|mapred.input.dir | mapreduce.input.fileinputformat.inputdir
-*---+---+
-|mapred.input.pathFilter.class | mapreduce.input.pathFilter.class
-*---+---+
-|mapred.jar | mapreduce.job.jar
-*---+---+
-|mapred.job.classpath.archives | mapreduce.job.classpath.archives
-*---+---+
-|mapred.job.classpath.files | mapreduce.job.classpath.files
-*---+---+
-|mapred.job.id | mapreduce.job.id
-*---+---+
-|mapred.jobinit.threads | mapreduce.jobtracker.jobinit.threads
-*---+---+
-|mapred.job.map.memory.mb | mapreduce.map.memory.mb
-*---+---+
-|mapred.job.name | mapreduce.job.name
-*---+---+
-|mapred.job.priority | mapreduce.job.priority
-*---+---+
-|mapred.job.queue.name | mapreduce.job.queuename
-*---+---+
-|mapred.job.reduce.input.buffer.percent | mapreduce.reduce.input.buffer.percent
-*---+---+
-|mapred.job.reduce.markreset.buffer.percent | mapreduce.reduce.markreset.buffer.percent
-*---+---+
-|mapred.job.reduce.memory.mb | mapreduce.reduce.memory.mb
-*---+---+
-|mapred.job.reduce.total.mem.bytes | mapreduce.reduce.memory.totalbytes
-*---+---+
-|mapred.job.reuse.jvm.num.tasks | mapreduce.job.jvm.numtasks
-*---+---+
-|mapred.job.shuffle.input.buffer.percent | mapreduce.reduce.shuffle.input.buffer.percent
-*---+---+
-|mapred.job.shuffle.merge.percent | mapreduce.reduce.shuffle.merge.percent
-*---+---+
-|mapred.job.tracker.handler.count | mapreduce.jobtracker.handler.count
-*---+---+
-|mapred.job.tracker.history.completed.location | mapreduce.jobtracker.jobhistory.completed.location
-*---+---+
-|mapred.job.tracker.http.address | mapreduce.jobtracker.http.address
-*---+---+
-|mapred.jobtracker.instrumentation | mapreduce.jobtracker.instrumentation
-*---+---+
-|mapred.jobtracker.job.history.block.size | mapreduce.jobtracker.jobhistory.block.size
-*---+---+
-|mapred.job.tracker.jobhistory.lru.cache.size | mapreduce.jobtracker.jobhistory.lru.cache.size
-*---+---+
-|mapred.job.tracker | mapreduce.jobtracker.address
-*---+---+
-|mapred.jobtracker.maxtasks.per.job | mapreduce.jobtracker.maxtasks.perjob
-*---+---+
-|mapred.job.tracker.persist.jobstatus.active | mapreduce.jobtracker.persist.jobstatus.active
-*---+---+
-|mapred.job.tracker.persist.jobstatus.dir | mapreduce.jobtracker.persist.jobstatus.dir
-*---+---+
-|mapred.job.tracker.persist.jobstatus.hours | mapreduce.jobtracker.persist.jobstatus.hours
-*---+---+
-|mapred.jobtracker.restart.recover | mapreduce.jobtracker.restart.recover
-*---+---+
-|mapred.job.tracker.retiredjobs.cache.size | mapreduce.jobtracker.retiredjobs.cache.size
-*---+---+
-|mapred.job.tracker.retire.jobs | mapreduce.jobtracker.retirejobs
-*---+---+
-|mapred.jobtracker.taskalloc.capacitypad | mapreduce.jobtracker.taskscheduler.taskalloc.capacitypad
-*---+---+
-|mapred.jobtracker.taskScheduler | mapreduce.jobtracker.taskscheduler
-*---+---+
-|mapred.jobtracker.taskScheduler.maxRunningTasksPerJob | mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob
-*---+---+
-|mapred.join.expr | mapreduce.join.expr
-*---+---+
-|mapred.join.keycomparator | mapreduce.join.keycomparator
-*---+---+
-|mapred.lazy.output.format | mapreduce.output.lazyoutputformat.outputformat
-*---+---+
-|mapred.line.input.format.linespermap | mapreduce.input.lineinputformat.linespermap
-*---+---+
-|mapred.linerecordreader.maxlength | mapreduce.input.linerecordreader.line.maxlength
-*---+---+
-|mapred.local.dir | mapreduce.cluster.local.dir
-*---+---+
-|mapred.local.dir.minspacekill | mapreduce.tasktracker.local.dir.minspacekill
-*---+---+
-|mapred.local.dir.minspacestart | mapreduce.tasktracker.local.dir.minspacestart
-*---+---+
-|mapred.map.child.env | mapreduce.map.env
-*---+---+
-|mapred.map.child.java.opts | mapreduce.map.java.opts
-*---+---+
-|mapred.map.child.log.level | mapreduce.map.log.level
-*---+---+
-|mapred.map.max.attempts | mapreduce.map.maxattempts
-*---+---+
-|mapred.map.output.compression.codec | mapreduce.map.output.compress.codec
-*---+---+
-|mapred.mapoutput.key.class | mapreduce.map.output.key.class
-*---+---+
-|mapred.mapoutput.value.class | mapreduce.map.output.value.class
-*---+---+
-|mapred.mapper.regex.group | mapreduce.mapper.regexmapper..group
-*---+---+
-|mapred.mapper.regex | mapreduce.mapper.regex
-*---+---+
-|mapred.map.task.debug.script | mapreduce.map.debug.script
-*---+---+
-|mapred.map.tasks | mapreduce.job.maps
-*---+---+
-|mapred.map.tasks.speculative.execution | mapreduce.map.speculative
-*---+---+
-|mapred.max.map.failures.percent | mapreduce.map.failures.maxpercent
-*---+---+
-|mapred.max.reduce.failures.percent | mapreduce.reduce.failures.maxpercent
-*---+---+
-|mapred.max.split.size | mapreduce.input.fileinputformat.split.maxsize
-*---+---+
-|mapred.max.tracker.blacklists | mapreduce.jobtracker.tasktracker.maxblacklists
-*---+---+
-|mapred.max.tracker.failures | mapreduce.job.maxtaskfailures.per.tracker
-*---+---+
-|mapred.merge.recordsBeforeProgress | mapreduce.task.merge.progress.records
-*---+---+
-|mapred.min.split.size | mapreduce.input.fileinputformat.split.minsize
-*---+---+
-|mapred.min.split.size.per.node | mapreduce.input.fileinputformat.split.minsize.per.node
-*---+---+
-|mapred.min.split.size.per.rack | mapreduce.input.fileinputformat.split.minsize.per.rack
-*---+---+
-|mapred.output.compression.codec | mapreduce.output.fileoutputformat.compress.codec
-*---+---+
-|mapred.output.compression.type | mapreduce.output.fileoutputformat.compress.type
-*---+---+
-|mapred.output.compress | mapreduce.output.fileoutputformat.compress
-*---+---+
-|mapred.output.dir | mapreduce.output.fileoutputformat.outputdir
-*---+---+
-|mapred.output.key.class | mapreduce.job.output.key.class
-*---+---+
-|mapred.output.key.comparator.class | mapreduce.job.output.key.comparator.class
-*---+---+
-|mapred.output.value.class | mapreduce.job.output.value.class
-*---+---+
-|mapred.output.value.groupfn.class | mapreduce.job.output.group.comparator.class
-*---+---+
-|mapred.permissions.supergroup | mapreduce.cluster.permissions.supergroup
-*---+---+
-|mapred.pipes.user.inputformat | mapreduce.pipes.inputformat
-*---+---+
-|mapred.reduce.child.env | mapreduce.reduce.env
-*---+---+
-|mapred.reduce.child.java.opts | mapreduce.reduce.java.opts
-*---+---+
-|mapred.reduce.child.log.level | mapreduce.reduce.log.level
-*---+---+
-|mapred.reduce.max.attempts | mapreduce.reduce.maxattempts
-*---+---+
-|mapred.reduce.parallel.copies | mapreduce.reduce.shuffle.parallelcopies
-*---+---+
-|mapred.reduce.slowstart.completed.maps | mapreduce.job.reduce.slowstart.completedmaps
-*---+---+
-|mapred.reduce.task.debug.script | mapreduce.reduce.debug.script
-*---+---+
-|mapred.reduce.tasks | mapreduce.job.reduces
-*---+---+
-|mapred.reduce.tasks.speculative.execution | mapreduce.reduce.speculative
-*---+---+
-|mapred.seqbinary.output.key.class | mapreduce.output.seqbinaryoutputformat.key.class
-*---+---+
-|mapred.seqbinary.output.value.class | mapreduce.output.seqbinaryoutputformat.value.class
-*---+---+
-|mapred.shuffle.connect.timeout | mapreduce.reduce.shuffle.connect.timeout
-*---+---+
-|mapred.shuffle.read.timeout | mapreduce.reduce.shuffle.read.timeout
-*---+---+
-|mapred.skip.attempts.to.start.skipping | mapreduce.task.skip.start.attempts
-*---+---+
-|mapred.skip.map.auto.incr.proc.count | mapreduce.map.skip.proc-count.auto-incr
-*---+---+
-|mapred.skip.map.max.skip.records | mapreduce.map.skip.maxrecords
-*---+---+
-|mapred.skip.on | mapreduce.job.skiprecords
-*---+---+
-|mapred.skip.out.dir | mapreduce.job.skip.outdir
-*---+---+
-|mapred.skip.reduce.auto.incr.proc.count | mapreduce.reduce.skip.proc-count.auto-incr
-*---+---+
-|mapred.skip.reduce.max.skip.groups | mapreduce.reduce.skip.maxgroups
-*---+---+
-|mapred.speculative.execution.slowNodeThreshold | mapreduce.job.speculative.slownodethreshold
-*---+---+
-|mapred.speculative.execution.slowTaskThreshold | mapreduce.job.speculative.slowtaskthreshold
-*---+---+
-|mapred.speculative.execution.speculativeCap | mapreduce.job.speculative.speculativecap
-*---+---+
-|mapred.submit.replication | mapreduce.client.submit.file.replication
-*---+---+
-|mapred.system.dir | mapreduce.jobtracker.system.dir
-*---+---+
-|mapred.task.cache.levels | mapreduce.jobtracker.taskcache.levels
-*---+---+
-|mapred.task.id | mapreduce.task.attempt.id
-*---+---+
-|mapred.task.is.map | mapreduce.task.ismap
-*---+---+
-|mapred.task.partition | mapreduce.task.partition
-*---+---+
-|mapred.task.profile | mapreduce.task.profile
-*---+---+
-|mapred.task.profile.maps | mapreduce.task.profile.maps
-*---+---+
-|mapred.task.profile.params | mapreduce.task.profile.params
-*---+---+
-|mapred.task.profile.reduces | mapreduce.task.profile.reduces
-*---+---+
-|mapred.task.timeout | mapreduce.task.timeout
-*---+---+
-|mapred.tasktracker.dns.interface | mapreduce.tasktracker.dns.interface
-*---+---+
-|mapred.tasktracker.dns.nameserver | mapreduce.tasktracker.dns.nameserver
-*---+---+
-|mapred.tasktracker.events.batchsize | mapreduce.tasktracker.events.batchsize
-*---+---+
-|mapred.tasktracker.expiry.interval | mapreduce.jobtracker.expire.trackers.interval
-*---+---+
-|mapred.task.tracker.http.address | mapreduce.tasktracker.http.address
-*---+---+
-|mapred.tasktracker.indexcache.mb | mapreduce.tasktracker.indexcache.mb
-*---+---+
-|mapred.tasktracker.instrumentation | mapreduce.tasktracker.instrumentation
-*---+---+
-|mapred.tasktracker.map.tasks.maximum | mapreduce.tasktracker.map.tasks.maximum
-*---+---+
-|mapred.tasktracker.memory_calculator_plugin | mapreduce.tasktracker.resourcecalculatorplugin
-*---+---+
-|mapred.tasktracker.memorycalculatorplugin | mapreduce.tasktracker.resourcecalculatorplugin
-*---+---+
-|mapred.tasktracker.reduce.tasks.maximum | mapreduce.tasktracker.reduce.tasks.maximum
-*---+---+
-|mapred.task.tracker.report.address | mapreduce.tasktracker.report.address
-*---+---+
-|mapred.task.tracker.task-controller | mapreduce.tasktracker.taskcontroller
-*---+---+
-|mapred.tasktracker.taskmemorymanager.monitoring-interval | mapreduce.tasktracker.taskmemorymanager.monitoringinterval
-*---+---+
-|mapred.tasktracker.tasks.sleeptime-before-sigkill | mapreduce.tasktracker.tasks.sleeptimebeforesigkill
-*---+---+
-|mapred.temp.dir | mapreduce.cluster.temp.dir
-*---+---+
-|mapred.text.key.comparator.options | mapreduce.partition.keycomparator.options
-*---+---+
-|mapred.text.key.partitioner.options | mapreduce.partition.keypartitioner.options
-*---+---+
-|mapred.textoutputformat.separator | mapreduce.output.textoutputformat.separator
-*---+---+
-|mapred.tip.id | mapreduce.task.id
-*---+---+
-|mapreduce.combine.class | mapreduce.job.combine.class
-*---+---+
-|mapreduce.inputformat.class | mapreduce.job.inputformat.class
-*---+---+
-|mapreduce.job.counters.limit | mapreduce.job.counters.max
-*---+---+
-|mapreduce.jobtracker.permissions.supergroup | mapreduce.cluster.permissions.supergroup
-*---+---+
-|mapreduce.map.class | mapreduce.job.map.class
-*---+---+
-|mapreduce.outputformat.class | mapreduce.job.outputformat.class
-*---+---+
-|mapreduce.partitioner.class | mapreduce.job.partitioner.class
-*---+---+
-|mapreduce.reduce.class | mapreduce.job.reduce.class
-*---+---+
-|mapred.used.genericoptionsparser | mapreduce.client.genericoptionsparser.used
-*---+---+
-|mapred.userlog.limit.kb | mapreduce.task.userlog.limit.kb
-*---+---+
-|mapred.userlog.retain.hours | mapreduce.job.userlog.retain.hours
-*---+---+
-|mapred.working.dir | mapreduce.job.working.dir
-*---+---+
-|mapred.work.output.dir | mapreduce.task.output.dir
-*---+---+
-|min.num.spills.for.combine | mapreduce.map.combine.minspills
-*---+---+
-|reduce.output.key.value.fields.spec | mapreduce.fieldsel.reduce.output.key.value.fields.spec
-*---+---+
-|security.job.submission.protocol.acl | security.job.client.protocol.acl
-*---+---+
-|security.task.umbilical.protocol.acl | security.job.task.protocol.acl
-*---+---+
-|sequencefile.filter.class | mapreduce.input.sequencefileinputfilter.class
-*---+---+
-|sequencefile.filter.frequency | mapreduce.input.sequencefileinputfilter.frequency
-*---+---+
-|sequencefile.filter.regex | mapreduce.input.sequencefileinputfilter.regex
-*---+---+
-|session.id | dfs.metrics.session-id
-*---+---+
-|slave.host.name | dfs.datanode.hostname
-*---+---+
-|slave.host.name | mapreduce.tasktracker.host.name
-*---+---+
-|tasktracker.contention.tracking | mapreduce.tasktracker.contention.tracking
-*---+---+
-|tasktracker.http.threads | mapreduce.tasktracker.http.threads
-*---+---+
-|topology.node.switch.mapping.impl | net.topology.node.switch.mapping.impl
-*---+---+
-|topology.script.file.name | net.topology.script.file.name
-*---+---+
-|topology.script.number.args | net.topology.script.number.args
-*---+---+
-|user.name | mapreduce.job.user.name
-*---+---+
-|webinterface.private.actions | mapreduce.jobtracker.webinterface.trusted
-*---+---+
-|yarn.app.mapreduce.yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts | yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts
-*---+---+
-
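-  Deprecated names continue to work: at runtime Hadoop translates them to
-  their replacements and logs a deprecation warning. As a hedged sketch (the
-  NameNode host below is a placeholder), the following two invocations are
-  equivalent:
-
-+---
-# deprecated name, still accepted with a warning in the logs
-hadoop fs -D fs.default.name=hdfs://namenodehost -ls /
-# preferred replacement from the table above
-hadoop fs -D fs.defaultFS=hdfs://namenodehost -ls /
-+---
-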
-  The following table lists additional changes to some configuration properties:
-
-*-------------------------------+-----------------------+
-|| <<Deprecated property name>> || <<New property name>>|
-*-------------------------------+-----------------------+
-|mapred.create.symlink | NONE - symlinking is always on
-*---+---+
-|mapreduce.job.cache.symlink.create | NONE - symlinking is always on
-*---+---+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
deleted file mode 100644
index 6831ebf..0000000
--- a/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
+++ /dev/null
@@ -1,764 +0,0 @@
-~~ Licensed to the Apache Software Foundation (ASF) under one or more
-~~ contributor license agreements.  See the NOTICE file distributed with
-~~ this work for additional information regarding copyright ownership.
-~~ The ASF licenses this file to You under the Apache License, Version 2.0
-~~ (the "License"); you may not use this file except in compliance with
-~~ the License.  You may obtain a copy of the License at
-~~
-~~     http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License.
-
-  ---
-  File System Shell Guide
-  ---
-  ---
-  ${maven.build.timestamp}
-
-%{toc}
-
-Overview
-
-   The File System (FS) shell includes various shell-like commands that
-   directly interact with the Hadoop Distributed File System (HDFS) as well as
-   other file systems that Hadoop supports, such as Local FS, HFTP FS, S3 FS,
-   and others. The FS shell is invoked by:
-
-+---
-bin/hadoop fs <args>
-+---
-
-   All FS shell commands take path URIs as arguments. The URI format is
-   <<<scheme://authority/path>>>. For HDFS the scheme is <<<hdfs>>>, and for
-   the Local FS the scheme is <<<file>>>. The scheme and authority are
-   optional. If not specified, the default scheme specified in the
-   configuration is used. An HDFS file or directory such as /parent/child can
-   be specified as <<<hdfs://namenodehost/parent/child>>> or simply as
-   <<</parent/child>>> (given that your configuration is set to point to
-   <<<hdfs://namenodehost>>>).
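-
-   As a sketch of that equivalence (reusing the illustrative host and path
-   above), the following two commands list the same directory when the
-   default filesystem is <<<hdfs://namenodehost>>>:
-
-+---
-# fully qualified URI
-hadoop fs -ls hdfs://namenodehost/parent/child
-# relies on the default scheme and authority from the configuration
-hadoop fs -ls /parent/child
-+---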
-
-   Most of the commands in FS shell behave like corresponding Unix commands.
-   Differences are described with each of the commands. Error information is
-   sent to stderr and the output is sent to stdout.
-
-   If HDFS is being used, <<<hdfs dfs>>> is a synonym.
-
-   See the {{{./CommandsManual.html}Commands Manual}} for generic shell options.
-
-* appendToFile
-
-      Usage: <<<hadoop fs -appendToFile <localsrc> ... <dst> >>>
-
-      Append single src, or multiple srcs from local file system to the
-      destination file system. Also reads input from stdin and appends to
-      destination file system.
-
-        * <<<hadoop fs -appendToFile localfile /user/hadoop/hadoopfile>>>
-
-        * <<<hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile>>>
-
-        * <<<hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile>>>
-
-        * <<<hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile>>>
-          Reads the input from stdin.
-
-      Exit Code:
-
-      Returns 0 on success and 1 on error.
-
-* cat
-
-   Usage: <<<hadoop fs -cat URI [URI ...]>>>
-
-   Copies source paths to stdout.
-
-   Example:
-
-     * <<<hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2>>>
-
-     * <<<hadoop fs -cat file:///file3 /user/hadoop/file4>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* checksum
-
-  Usage: <<<hadoop fs -checksum URI>>>
-
-  Returns the checksum information of a file.
-
-  Example:
-
-    * <<<hadoop fs -checksum hdfs://nn1.example.com/file1>>>
-
-    * <<<hadoop fs -checksum file:///etc/hosts>>>
-
-* chgrp
-
-   Usage: <<<hadoop fs -chgrp [-R] GROUP URI [URI ...]>>>
-
-   Change group association of files. The user must be the owner of files, or
-   else a super-user. Additional information is in the
-   {{{../hadoop-hdfs/HdfsPermissionsGuide.html}Permissions Guide}}.
-
-   Options
-
-     * The -R option will make the change recursively through the directory structure.
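-
-   Example (a sketch; the group name <<<hadoopgroup>>> is a placeholder):
-
-+---
-# recursively change the group owning a directory tree
-hadoop fs -chgrp -R hadoopgroup /user/hadoop/dir1
-+---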
-
-* chmod
-
-   Usage: <<<hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]>>>
-
-   Change the permissions of files. With -R, make the change recursively
-   through the directory structure. The user must be the owner of the file, or
-   else a super-user. Additional information is in the
-   {{{../hadoop-hdfs/HdfsPermissionsGuide.html}Permissions Guide}}.
-
-   Options
-
-     * The -R option will make the change recursively through the directory structure.
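-
-   Example (a sketch; the modes shown are illustrative):
-
-+---
-# set an explicit octal mode recursively
-hadoop fs -chmod -R 755 /user/hadoop/dir1
-# add group write permission to a single file
-hadoop fs -chmod g+w /user/hadoop/file1
-+---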
-
-* chown
-
-   Usage: <<<hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ]>>>
-
-   Change the owner of files. The user must be a super-user. Additional information
-   is in the {{{../hadoop-hdfs/HdfsPermissionsGuide.html}Permissions Guide}}.
-
-   Options
-
-     * The -R option will make the change recursively through the directory structure.
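-
-   Example (a sketch; the owner and group names are placeholders):
-
-+---
-# recursively change both the owner and the group
-hadoop fs -chown -R hadoopuser:hadoopgroup /user/hadoop/dir1
-+---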
-
-* copyFromLocal
-
-   Usage: <<<hadoop fs -copyFromLocal <localsrc> URI>>>
-
-   Similar to put command, except that the source is restricted to a local
-   file reference.
-
-   Options:
-
-     * The -f option will overwrite the destination if it already exists.
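-
-   Example (a sketch, reusing the illustrative file names from the put
-   examples in this guide):
-
-+---
-hadoop fs -copyFromLocal localfile /user/hadoop/hadoopfile
-# overwrite the destination if it already exists
-hadoop fs -copyFromLocal -f localfile /user/hadoop/hadoopfile
-+---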
-
-* copyToLocal
-
-   Usage: <<<hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst> >>>
-
-   Similar to get command, except that the destination is restricted to a
-   local file reference.
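-
-   Example (a sketch, mirroring the get examples in this guide):
-
-+---
-hadoop fs -copyToLocal /user/hadoop/file localfile
-+---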
-
-* count
-
-   Usage: <<<hadoop fs -count [-q] [-h] [-v] <paths> >>>
-
-   Count the number of directories, files and bytes under the paths that match
-   the specified file pattern.  The output columns with -count are: DIR_COUNT,
-   FILE_COUNT, CONTENT_SIZE, PATHNAME
-
-   The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA,
-   REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME
-
-   The -h option shows sizes in human readable format.
-
-   The -v option displays a header line.
-
-   Example:
-
-     * <<<hadoop fs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2>>>
-
-     * <<<hadoop fs -count -q hdfs://nn1.example.com/file1>>>
-
-     * <<<hadoop fs -count -q -h hdfs://nn1.example.com/file1>>>
-
-     * <<<hdfs dfs -count -q -h -v hdfs://nn1.example.com/file1>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* cp
-
-   Usage: <<<hadoop fs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest> >>>
-
-   Copy files from source to destination. This command allows multiple sources
-   as well in which case the destination must be a directory.
-
-   'raw.*' namespace extended attributes are preserved if (1) the source and
-   destination filesystems support them (HDFS only), and (2) all source and
-   destination pathnames are in the /.reserved/raw hierarchy. Determination of
-   whether raw.* namespace xattrs are preserved is independent of the
-   -p (preserve) flag.
-
-    Options:
-
-      * The -f option will overwrite the destination if it already exists.
-
-      * The -p option will preserve file attributes [topx] (timestamps,
-        ownership, permission, ACL, XAttr). If -p is specified with no <arg>,
-        then preserves timestamps, ownership, permission. If -pa is specified,
-        then preserves permission also because ACL is a super-set of
-        permission. Determination of whether raw namespace extended attributes
-        are preserved is independent of the -p flag.
-
-   Example:
-
-     * <<<hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2>>>
-
-     * <<<hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* createSnapshot
-
-  See {{{../hadoop-hdfs/HdfsSnapshots.html}HDFS Snapshots Guide}}.
-
-
-* deleteSnapshot
-
-  See {{{../hadoop-hdfs/HdfsSnapshots.html}HDFS Snapshots Guide}}.
-
-* df
-
-   Usage: <<<hadoop fs -df [-h] URI [URI ...]>>>
-
-   Displays free space.
-
-   Options:
-
-     * The -h option will format file sizes in a "human-readable" fashion (e.g.
-       64.0m instead of 67108864).
-
-   Example:
-
-     * <<<hadoop fs -df /user/hadoop/dir1>>>
-
-* du
-
-   Usage: <<<hadoop fs -du [-s] [-h] URI [URI ...]>>>
-
-   Displays sizes of files and directories contained in the given directory or
-   the length of a file in case it's just a file.
-
-   Options:
-
-     * The -s option will result in an aggregate summary of file lengths being
-       displayed, rather than the individual files.
-
-     * The -h option will format file sizes in a "human-readable" fashion (e.g.
-       64.0m instead of 67108864).
-
-   Example:
-
-    * <<<hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1>>>
-
-   Exit Code:
-   Returns 0 on success and -1 on error.
-
-* dus
-
-   Usage: <<<hadoop fs -dus <args> >>>
-
-   Displays a summary of file lengths.
-
-   <<Note:>> This command is deprecated. Instead use <<<hadoop fs -du -s>>>.
-
-* expunge
-
-   Usage: <<<hadoop fs -expunge>>>
-
-   Empty the Trash. Refer to the {{{../hadoop-hdfs/HdfsDesign.html}
-   HDFS Architecture Guide}} for more information on the Trash feature.
-
-* find
-
-   Usage: <<<hadoop fs -find <path> ... <expression> ... >>>
-
-   Finds all files that match the specified expression and applies selected
-   actions to them. If no <path> is specified then defaults to the current
-   working directory. If no expression is specified then defaults to -print.
-
-   The following primary expressions are recognised:
-
-     * -name pattern \
-       -iname pattern
-
-       Evaluates as true if the basename of the file matches the pattern using
-       standard file system globbing. If -iname is used then the match is case
-       insensitive.
-
-     * -print \
-       -print0
-
-       Always evaluates to true. Causes the current pathname to be written to
-       standard output. If the -print0 expression is used then an ASCII NULL
-       character is appended.
-
-   The following operators are recognised:
-
-     * expression -a expression \
-       expression -and expression \
-       expression expression
-
-       Logical AND operator for joining two expressions. Returns true if both
-       child expressions return true. Implied by the juxtaposition of two
-       expressions and so does not need to be explicitly specified. The second
-       expression will not be applied if the first fails.
-
-   Example:
-
-   <<<hadoop fs -find / -name test -print>>>
-
-   Exit Code:
-
-     Returns 0 on success and -1 on error.
-
-* get
-
-   Usage: <<<hadoop fs -get [-ignorecrc] [-crc] <src> <localdst> >>>
-
-   Copy files to the local file system. Files that fail the CRC check may be
-   copied with the -ignorecrc option. Files and CRCs may be copied using the
-   -crc option.
-
-   Example:
-
-     * <<<hadoop fs -get /user/hadoop/file localfile>>>
-
-     * <<<hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* getfacl
-
-   Usage: <<<hadoop fs -getfacl [-R] <path> >>>
-
-   Displays the Access Control Lists (ACLs) of files and directories. If a
-   directory has a default ACL, then getfacl also displays the default ACL.
-
-   Options:
-
-     * -R: List the ACLs of all files and directories recursively.
-
-     * <path>: File or directory to list.
-
-   Examples:
-
-     * <<<hadoop fs -getfacl /file>>>
-
-     * <<<hadoop fs -getfacl -R /dir>>>
-
-   Exit Code:
-
-   Returns 0 on success and non-zero on error.
-
-* getfattr
-
-   Usage: <<<hadoop fs -getfattr [-R] {-n name | -d} [-e en] <path> >>>
-
-   Displays the extended attribute names and values (if any) for a file or
-   directory.
-
-   Options:
-
-     * -R: Recursively list the attributes for all files and directories.
-
-     * -n name: Dump the named extended attribute value.
-
-     * -d: Dump all extended attribute values associated with pathname.
-
-     * -e <encoding>: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
-
-     * <path>: The file or directory.
-
-   Examples:
-
-     * <<<hadoop fs -getfattr -d /file>>>
-
-     * <<<hadoop fs -getfattr -R -n user.myAttr /dir>>>
-
-   Exit Code:
-
-   Returns 0 on success and non-zero on error.
-
-* getmerge
-
-   Usage: <<<hadoop fs -getmerge <src> <localdst> [addnl]>>>
-
-   Takes a source directory and a destination file as input and concatenates
-   files in src into the destination local file. Optionally addnl can be set to
-   enable adding a newline character at the
-   end of each file.
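-
-   Example (a sketch; the local destination file name is a placeholder):
-
-+---
-# concatenate every file under /user/hadoop/dir1 into one local file
-hadoop fs -getmerge /user/hadoop/dir1 merged.txt
-+---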
-
-* help
-
-   Usage: <<<hadoop fs -help>>>
-
-   Return usage output.
-
-* ls
-
-   Usage: <<<hadoop fs -ls [-d] [-h] [-R] [-t] [-S] [-r] [-u] <args> >>>
-
-   Options:
-
-     * -d: Directories are listed as plain files.
-
-     * -h: Format file sizes in a human-readable fashion (eg 64.0m instead of 67108864).
-
-     * -R: Recursively list subdirectories encountered.
-
-     * -t: Sort output by modification time (most recent first).
-
-     * -S: Sort output by file size.
-
-     * -r: Reverse the sort order.
-
-     * -u: Use access time rather than modification time for display and sorting.
-
-   For a file ls returns stat on the file with the following format:
-
-+---+
-permissions number_of_replicas userid groupid filesize modification_date modification_time filename
-+---+
-
-   For a directory it returns list of its direct children as in Unix. A directory is listed as:
-
-+---+
-permissions userid groupid modification_date modification_time dirname
-+---+
-
-   Files within a directory are ordered by filename by default.
-
-
-   Example:
-
-     * <<<hadoop fs -ls /user/hadoop/file1>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* lsr
-
-   Usage: <<<hadoop fs -lsr <args> >>>
-
-   Recursive version of ls.
-
-   <<Note:>> This command is deprecated. Instead use <<<hadoop fs -ls -R>>>
-
-* mkdir
-
-   Usage: <<<hadoop fs -mkdir [-p] <paths> >>>
-
-   Takes path URIs as arguments and creates directories.
-
-   Options:
-
-     * The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
-
-   Example:
-
-     * <<<hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2>>>
-
-     * <<<hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* moveFromLocal
-
-   Usage: <<<hadoop fs -moveFromLocal <localsrc> <dst> >>>
-
-   Similar to put command, except that the source localsrc is deleted after
-   it's copied.
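-
-   Example (a sketch, reusing the illustrative names from the put examples):
-
-+---
-# localfile is removed from the local file system once the copy succeeds
-hadoop fs -moveFromLocal localfile /user/hadoop/hadoopfile
-+---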
-
-* moveToLocal
-
-   Usage: <<<hadoop fs -moveToLocal [-crc] <src> <dst> >>>
-
-   Displays a "Not implemented yet" message.
-
-* mv
-
-   Usage: <<<hadoop fs -mv URI [URI ...] <dest> >>>
-
-   Moves files from source to destination. This command allows multiple sources
-   as well in which case the destination needs to be a directory. Moving files
-   across file systems is not permitted.
-
-   Example:
-
-     * <<<hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2>>>
-
-     * <<<hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* put
-
-   Usage: <<<hadoop fs -put <localsrc> ... <dst> >>>
-
-   Copy single src, or multiple srcs from local file system to the destination
-   file system. Also reads input from stdin and writes to destination file
-   system.
-
-     * <<<hadoop fs -put localfile /user/hadoop/hadoopfile>>>
-
-     * <<<hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir>>>
-
-     * <<<hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile>>>
-
-     * <<<hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile>>>
-       Reads the input from stdin.
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* renameSnapshot
-
-  See {{{../hadoop-hdfs/HdfsSnapshots.html}HDFS Snapshots Guide}}.
-
-* rm
-
-   Usage: <<<hadoop fs -rm [-f] [-r|-R] [-skipTrash] URI [URI ...]>>>
-
-   Delete files specified as args.
-
-   Options:
-
-    * The -f option will not display a diagnostic message or modify the exit
-      status to reflect an error if the file does not exist.
-
-    * The -R option deletes the directory and any content under it recursively.
-
-    * The -r option is equivalent to -R.
-
-    * The -skipTrash option will bypass trash, if enabled, and delete the
-      specified file(s) immediately. This can be useful when it is necessary
-      to delete files from an over-quota directory.
-
-   Example:
-
-     * <<<hadoop fs -rm hdfs://nn.example.com/file /user/hadoop/emptydir>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* rmdir
-
-   Usage: <<<hadoop fs -rmdir [--ignore-fail-on-non-empty] URI [URI ...]>>>
-
-   Delete a directory.
-
-   Options:
-
-     * --ignore-fail-on-non-empty: When using wildcards, do not fail if a directory still contains files.
-
-   Example:
-
-     * <<<hadoop fs -rmdir /user/hadoop/emptydir>>>
-
-* rmr
-
-   Usage: <<<hadoop fs -rmr [-skipTrash] URI [URI ...]>>>
-
-   Recursive version of delete.
-
-   <<Note:>> This command is deprecated. Instead use <<<hadoop fs -rm -r>>>
-
-* setfacl
-
-   Usage: <<<hadoop fs -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>] >>>
-
-   Sets Access Control Lists (ACLs) of files and directories.
-
-   Options:
-
-     * -b: Remove all but the base ACL entries. The entries for user, group and
-       others are retained for compatibility with permission bits.
-
-     * -k: Remove the default ACL.
-
-     * -R: Apply operations to all files and directories recursively.
-
-     * -m: Modify ACL. New entries are added to the ACL, and existing entries
-       are retained.
-
-     * -x: Remove specified ACL entries. Other ACL entries are retained.
-
-     * --set: Fully replace the ACL, discarding all existing entries. The
-       <acl_spec> must include entries for user, group, and others for
-       compatibility with permission bits.
-
-     * <acl_spec>: Comma separated list of ACL entries.
-
-     * <path>: File or directory to modify.
-
-   Examples:
-
-      * <<<hadoop fs -setfacl -m user:hadoop:rw- /file>>>
-
-      * <<<hadoop fs -setfacl -x user:hadoop /file>>>
-
-      * <<<hadoop fs -setfacl -b /file>>>
-
-      * <<<hadoop fs -setfacl -k /dir>>>
-
-      * <<<hadoop fs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file>>>
-
-      * <<<hadoop fs -setfacl -R -m user:hadoop:r-x /dir>>>
-
-      * <<<hadoop fs -setfacl -m default:user:hadoop:r-x /dir>>>
-
-   Exit Code:
-
-   Returns 0 on success and non-zero on error.
-
-* setfattr
-
-   Usage: <<<hadoop fs -setfattr {-n name [-v value] | -x name} <path> >>>
-
-   Sets an extended attribute name and value for a file or directory.
-
-   Options:
-
-     * -n name: The extended attribute name.
-
-     * -v value: The extended attribute value. There are three different encoding methods for the value. If the argument is enclosed in double quotes, then the value is the string inside the quotes. If the argument is prefixed with 0x or 0X, then it is taken as a hexadecimal number. If the argument begins with 0s or 0S, then it is taken as a base64 encoding.
-
-     * -x name: Remove the extended attribute.
-
-     * <path>: The file or directory.
-
-   Examples:
-
-      * <<<hadoop fs -setfattr -n user.myAttr -v myValue /file>>>
-
-      * <<<hadoop fs -setfattr -n user.noValue /file>>>
-
-      * <<<hadoop fs -setfattr -x user.myAttr /file>>>
-
-   Exit Code:
-
-   Returns 0 on success and non-zero on error.
-
-* setrep
-
-   Usage: <<<hadoop fs -setrep [-R] [-w] <numReplicas> <path> >>>
-
-   Changes the replication factor of a file. If <path> is a directory then
-   the command recursively changes the replication factor of all files under
-   the directory tree rooted at <path>.
-
-   Options:
-
-     * The -w flag requests that the command wait for the replication
-       to complete. This can potentially take a very long time.
-
-     * The -R flag is accepted for backwards compatibility. It has no effect.
-
-   Example:
-
-     * <<<hadoop fs -setrep -w 3 /user/hadoop/dir1>>>
-
-   Exit Code:
-
-   Returns 0 on success and -1 on error.
-
-* stat
-
-   Usage: <<<hadoop fs -stat [format] \<path\> ...>>>
-
-   Print statistics about the file/directory at \<path\> in the specified
-   format. Format accepts filesize in blocks (%b), type (%F), group name of
-   owner (%g), name (%n), block size (%o), replication (%r), user name of
-   owner(%u), and modification date (%y, %Y). %y shows UTC date as
-   "yyyy-MM-dd HH:mm:ss" and %Y shows milliseconds since January 1, 1970 UTC.
-   If the format is not specified, %y is used by default.
-
-   Example:
-
-     * <<<hadoop fs -stat "%F %u:%g %b %y %n" /file>>>
-
-   Exit Code:
-   Returns 0 on success and -1 on error.
-
-* tail
-
-   Usage: <<<hadoop fs -tail [-f] URI>>>
-
-   Displays last kilobyte of the file to stdout.
-
-   Options:
-
-     * The -f option will output appended data as the file grows, as in Unix.
-
-   Example:
-
-     * <<<hadoop fs -tail pathname>>>
-
-   Exit Code:
-   Returns 0 on success and -1 on error.
-
-* test
-
-   Usage: <<<hadoop fs -test -[defsz] URI>>>
-
-   Options:
-
-     * -d: if the path is a directory, return 0.
-
-     * -e: if the path exists, return 0.
-
-     * -f: if the path is a file, return 0.
-
-     * -s: if the path is not empty, return 0.
-
-     * -z: if the file is zero length, return 0.
-
-   Example:
-
-     * <<<hadoop fs -test -e filename>>>
-
-* text
-
-   Usage: <<<hadoop fs -text <src> >>>
-
-   Takes a source file and outputs the file in text format. The allowed formats
-   are zip and TextRecordInputStream.
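-
-   Example (a sketch; the path to a SequenceFile is a placeholder):
-
-+---
-# print the records of a SequenceFile as readable text
-hadoop fs -text /user/hadoop/sequencefile
-+---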
-
-* touchz
-
-   Usage: <<<hadoop fs -touchz URI [URI ...]>>>
-
-   Create a file of zero length.
-
-   Example:
-
-     * <<<hadoop fs -touchz pathname>>>
-
-   Exit Code:
-   Returns 0 on success and -1 on error.
-
-
-* usage
-
-   Usage: <<<hadoop fs -usage command>>>
-
-   Return the help for an individual command.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
deleted file mode 100644
index 1f95da0..0000000
--- a/hadoop-common-project/hadoop-common/src/site/apt/HttpAuthentication.apt.vm
+++ /dev/null
@@ -1,98 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Authentication for Hadoop HTTP web-consoles
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Authentication for Hadoop HTTP web-consoles
-
-%{toc|section=1|fromDepth=0}
-
-* Introduction
-
-   This document describes how to configure Hadoop HTTP web-consoles to
-   require user authentication.
-
-   By default Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers
-   and DataNodes) allow access without any form of authentication.
-
-   Similarly to Hadoop RPC, Hadoop HTTP web-consoles can be configured to
-   require Kerberos authentication using HTTP SPNEGO protocol (supported
-   by browsers like Firefox and Internet Explorer).
-
-   In addition, Hadoop HTTP web-consoles support the equivalent of
-   Hadoop's Pseudo/Simple authentication. If this option is enabled, users
-   must specify their user name in the first browser interaction using the
-   user.name query string parameter. For example:
-   <<<http://localhost:50030/jobtracker.jsp?user.name=babu>>>.
-
-   If a custom authentication mechanism is required for the HTTP
-   web-consoles, it is possible to implement a plugin to support the
-   alternate authentication mechanism (refer to the Hadoop hadoop-auth module
-   for details on writing an <<<AuthenticatorHandler>>>).
-
-   The next section describes how to configure Hadoop HTTP web-consoles to
-   require user authentication.
-
-* Configuration
-
-   The following properties should be in the <<<core-site.xml>>> of all the
-   nodes in the cluster.
-
-   <<<hadoop.http.filter.initializers>>>: add to this property the
-   <<<org.apache.hadoop.security.AuthenticationFilterInitializer>>> initializer
-   class.
-
-   <<<hadoop.http.authentication.type>>>: Defines authentication used for the
-   HTTP web-consoles. The supported values are: <<<simple>>> | <<<kerberos>>> |
-   <<<#AUTHENTICATION_HANDLER_CLASSNAME#>>>. The default value is <<<simple>>>.
-
-   <<<hadoop.http.authentication.token.validity>>>: Indicates how long (in
-   seconds) an authentication token is valid before it has to be renewed.
-   The default value is <<<36000>>>.
-
-   <<<hadoop.http.authentication.signature.secret.file>>>: The signature secret
-   file for signing the authentication tokens. The same secret should be used 
-   for all nodes in the cluster, JobTracker, NameNode, DataNode and TaskTracker.
-   The default value is <<<${user.home}/hadoop-http-auth-signature-secret>>>.
-   IMPORTANT: This file should be readable only by the Unix user running the
-   daemons.
-
-   <<<hadoop.http.authentication.cookie.domain>>>: The domain to use for the
-   HTTP cookie that stores the authentication token. In order for
-   authentication to work correctly across all nodes in the cluster, the
-   domain must be correctly set. There is no default value; if no domain is
-   set, the HTTP cookie will work only with the hostname that issued it.
-
-   IMPORTANT: when using IP addresses, browsers ignore cookies with domain
-   settings. For this setting to work properly, all nodes in the cluster
-   must be configured to generate URLs with <<<hostname.domain>>> names in them.
-
-   <<<hadoop.http.authentication.simple.anonymous.allowed>>>: Indicates if
-   anonymous requests are allowed when using 'simple' authentication. The
-   default value is <<<true>>>.
-
-   <<<hadoop.http.authentication.kerberos.principal>>>: Indicates the Kerberos
-   principal to be used for HTTP endpoint when using 'kerberos'
-   authentication. The principal short name must be <<<HTTP>>> per Kerberos HTTP
-   SPNEGO specification. The default value is <<<HTTP/_HOST@$LOCALHOST>>>,
-   where <<<_HOST>>> -if present- is replaced with bind address of the HTTP
-   server.
-
-   <<<hadoop.http.authentication.kerberos.keytab>>>: Location of the keytab file
-   with the credentials for the Kerberos principal used for the HTTP
-   endpoint. The default value is <<<${user.home}/hadoop.keytab>>>.
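-
-   As a hedged illustration only (the realm and keytab path below are
-   placeholders, not defaults), a minimal <<<core-site.xml>>> fragment that
-   enables Kerberos SPNEGO for the web-consoles could look like this:
-
-+---
-<configuration>
-  <property>
-    <name>hadoop.http.filter.initializers</name>
-    <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
-  </property>
-  <property>
-    <name>hadoop.http.authentication.type</name>
-    <value>kerberos</value>
-  </property>
-  <property>
-    <name>hadoop.http.authentication.kerberos.principal</name>
-    <value>HTTP/_HOST@EXAMPLE.COM</value>
-  </property>
-  <property>
-    <name>hadoop.http.authentication.kerberos.keytab</name>
-    <value>/etc/security/keytab/spnego.service.keytab</value>
-  </property>
-</configuration>
-+---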
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
deleted file mode 100644
index 85e66bd..0000000
--- a/hadoop-common-project/hadoop-common/src/site/apt/InterfaceClassification.apt.vm
+++ /dev/null
@@ -1,239 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Interface Taxonomy: Audience and Stability Classification
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop Interface Taxonomy: Audience and Stability Classification
-
-%{toc|section=1|fromDepth=0}
-
-* Motivation
-
-  The interface taxonomy classification provided here is for guidance to
-  developers and users of interfaces. The classification guides a developer
-  to declare the targeted audience or users of an interface and also its
-  stability.
-
-  * Benefits to the user of an interface: Knows which interfaces to use or not
-    use and their stability.
-
-  * Benefits to the developer: to prevent accidental changes of interfaces and
-    hence accidental impact on users or other components or system. This is
-    particularly useful in large systems with many developers who may not all
-    have a shared state/history of the project.
-
-* Interface Classification
-
-  Hadoop adopts the following interface classification,
-  which was derived from the
-  {{{http://www.opensolaris.org/os/community/arc/policies/interface-taxonomy/#Advice}OpenSolaris taxonomy}}
-  and, to some extent, from the taxonomy used inside Yahoo. Interfaces have two
-  main attributes: Audience and Stability.
-
-** Audience
-
-   Audience denotes the potential consumers of the interface. While many
-   interfaces are internal/private to the implementation,
-   others are public/external interfaces meant for wider consumption by
-   applications and/or clients. For example, in POSIX, libc is an external or
-   public interface, while large parts of the kernel are internal or private
-   interfaces. Also, some interfaces are targeted towards other specific
-   subsystems.
-
-   Identifying the audience of an interface helps define the impact of
-   breaking it. For instance, it might be okay to break the compatibility of
-   an interface whose audience is a small number of specific subsystems. On
-   the other hand, it is probably not okay to break a protocol interface
-   that millions of Internet users depend on.
-
-   Hadoop uses the following kinds of audience in order of
-   increasing/wider visibility:
-
-   * Private:
-
-     * The interface is for internal use within the project (such as HDFS or
-     MapReduce) and should not be used by applications or by other projects. It
-     is subject to change at any time without notice. Most interfaces of a
-     project are Private (also referred to as project-private).
-
-   * Limited-Private:
-
-     * The interface is used by a specified set of projects or systems
-     (typically closely related projects). Other projects or systems should not
-     use the interface. Changes to the interface will be communicated/
-     negotiated with the specified projects. For example, in the Hadoop project,
-     some interfaces are LimitedPrivate\{HDFS, MapReduce\} in that they
-     are private to the HDFS and MapReduce projects.
-
-   * Public
-
-     * The interface is for general use by any application.
-
-   Hadoop doesn't have a Company-Private classification,
-   which is meant for APIs that are intended to be used by other projects
-   within the company, since it doesn't apply to open source projects. Also,
-   certain APIs are annotated as @VisibleForTesting
-   (from com.google.common.annotations.VisibleForTesting) - these are meant to
-   be used strictly for unit tests and should be treated as "Private" APIs.
-
-** Stability
-
-   Stability denotes how stable an interface is, as in when incompatible
-   changes to the interface are allowed. Hadoop APIs have the following
-   levels of stability.
-
-   * Stable
-
-     * Can evolve while retaining compatibility for minor release boundaries;
-     in other words, incompatible changes to APIs marked Stable are allowed
-     only at major releases (i.e. at m.0).
-
-   * Evolving
-
-     * Evolving, but incompatible changes are allowed at minor releases
-     (i.e. m.x)
-
-   * Unstable
-
-     * Incompatible changes to Unstable APIs are allowed any time. This
-     usually makes sense for only private interfaces.
-
-     * However one may call this out for a supposedly public interface to
-     highlight that it should not be used as an interface; for public
-     interfaces, labeling it as Not-an-interface is probably more appropriate
-     than "Unstable".
-
-       * Examples of publicly visible interfaces that are unstable (i.e.
-       not-an-interface): GUI, CLIs whose output format will change
-
-   * Deprecated
-
-     * APIs that could potentially be removed in the future and should not be
-     used.
-
-* How are the Classifications Recorded?
-
-  How will the classification be recorded for Hadoop APIs?
-
-  * Each interface or class will have the audience and stability recorded
-  using annotations in org.apache.hadoop.classification package.
-
-  * The javadoc generated by the maven target javadoc:javadoc lists only the
-  public API.
-
-  * One can derive the audience of java classes and java interfaces from the
-  audience of the package in which they are contained. Hence it is useful to
-  declare the audience of each java package as public or private (along with
-  the private audience variations).
-
-* FAQ
-
-  * Why aren’t the java scopes (private, package private and public) good
-  enough?
-
-    * Java’s scoping is not very complete. One is often forced to make a class
-    public in  order for other internal components to use it. It does not have
-    friends or sub-package-private like C++.
-
-  * But I can easily access a private implementation interface if it is Java
-  public. Where is the protection and control?
-
-    * The purpose of this is not providing absolute access control. Its purpose
-    is to communicate to users and developers. One can access private
-    implementation functions in libc; however if they change the internal
-    implementation details, your application will break and you will have little
-    sympathy from the folks who are supplying libc. If you use a non-public
-    interface you understand the risks.
-
-  * Why bother declaring the stability of a private interface?  Aren’t private
-  interfaces always unstable?
-
-    * Private interfaces are not always unstable. In the cases where they are
-    stable they capture internal properties of the system and can communicate
-    these properties to its internal users and to developers of the interface.
-
-      * e.g. In HDFS, NN-DN protocol is private but stable and can help
-      implement rolling upgrades. It communicates that this interface should not
-      be changed in incompatible ways even though it is private.
-
-      * e.g. In HDFS, FSImage stability can help provide more flexible roll
-      backs.
-
-  * What is the harm in applications using a private interface that is
-  stable? How is it different than a public stable interface?
-
-    * While a private interface marked as stable is targeted to change only at
-    major releases, it may break at other times if the providers of that
-    interface are willing to change the internal users of that interface.
-    Further, a public stable interface is less likely to break even at major
-    releases (even though it is allowed to break compatibility) because the
-    impact of the change is larger. If you use a private interface (regardless
-    of its stability) you run the risk of incompatibility.
-
-  * Why bother with Limited-private? Isn’t it giving special treatment to some
-  projects? That is not fair.
-
-    * First, most interfaces should be public or private; actually let us state
-    it even stronger: make it private unless you really want to expose it to
-    public for general use.
-
-    * Limited-private is for interfaces that are not intended for general use.
-    They are exposed to related projects that need special hooks. Such a
-    classification has a cost to both the supplier and consumer of the limited
-    interface. Both will have to work together if ever there is a need to break
-    the interface in the future; for example the supplier and the consumers will
-    have to work together to get coordinated releases of their respective
-    projects. This should not be taken lightly – if you can get away with
-    private then do so; if the interface is really for general use for all
-    applications then make it public. But remember that making an interface
-    public carries huge responsibility. Sometimes Limited-private is just right.
-
-    * A good example of a limited-private interface is BlockLocations. This is
-    a fairly low-level interface that we are willing to expose to MR and perhaps
-    HBase. We are likely to change it down the road, and at that time we will
-    have to coordinate with the MR team to release matching releases.
-    While MR and HDFS are always released in sync today, they may change down
-    the road.
-
-    * If you have a limited-private interface with many projects listed then
-    you are fooling yourself. It is practically public.
-
-    * It might be worth declaring a special audience classification called
-    Hadoop-Private for the Hadoop family.
-
-  * Let's treat all private interfaces as Hadoop-private. What is the harm in
-  projects in the Hadoop family having access to private classes?
-
-    * Do we want MR accessing class files that are implementation details
-    inside HDFS? There used to be many such layer violations in the code that
-    we have been cleaning up over the last few years. We don’t want such
-    layer violations to creep back in by not separating the major
-    components like HDFS and MR.
-
-  * Aren't all public interfaces stable?
-
-    * One may mark a public interface as evolving in its early days.
-    Here one is promising to make an effort to make compatible changes but may
-    need to break it at minor releases. 
-
-    * One example of a public interface that is unstable is where one is providing
-    an implementation of a standards-body based interface that is still under development.
-    For example, many companies, in an attempt to be first to market,
-    have provided implementations of a new NFS protocol even when the protocol was not
-    fully completed by the IETF.
-    The implementor cannot evolve the interface in a fashion that causes the least disruption
-    because the stability is controlled by the standards body. Hence it is appropriate to
-    label the interface as unstable.