Posted to common-commits@hadoop.apache.org by iw...@apache.org on 2022/07/22 02:29:05 UTC

[hadoop] branch trunk updated: Make upstream aware of 3.2.4 release.

This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
     new 3cce41a1f68 Make upstream aware of 3.2.4 release.
3cce41a1f68 is described below

commit 3cce41a1f683951ba8c49011ed130e4a6dbd7715
Author: Masatake Iwasaki <iw...@apache.org>
AuthorDate: Fri Jul 22 02:26:23 2022 +0000

    Make upstream aware of 3.2.4 release.
    
    (cherry picked from commit e1637a57dfd41385dbce5de90620c48a45abb263)
---
 .../site/markdown/release/3.2.4/CHANGELOG.3.2.4.md |   213 +
 .../markdown/release/3.2.4/RELEASENOTES.3.2.4.md   |    55 +
 .../dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml |   674 +
 .../jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml |   113 +
 .../jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml   | 28073 +++++++++++++++++++
 .../Apache_Hadoop_MapReduce_JobClient_3.2.4.xml    |    16 +
 .../jdiff/Apache_Hadoop_YARN_API_3.2.4.xml         | 25332 +++++++++++++++++
 .../jdiff/Apache_Hadoop_YARN_Client_3.2.4.xml      |  3006 ++
 .../jdiff/Apache_Hadoop_YARN_Common_3.2.4.xml      |  3964 +++
 .../Apache_Hadoop_YARN_Server_Common_3.2.4.xml     |  1412 +
 10 files changed, 62858 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md
new file mode 100644
index 00000000000..fc0079d1c9b
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/CHANGELOG.3.2.4.md
@@ -0,0 +1,213 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop Changelog
+
+## Release 3.2.4 - 2022-07-12
+
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-16337](https://issues.apache.org/jira/browse/HDFS-16337) | Show start time of Datanode on Web |  Minor | . | Tao Li | Tao Li |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-15075](https://issues.apache.org/jira/browse/HDFS-15075) | Remove process command timing from BPServiceActor |  Major | . | Íñigo Goiri | Xiaoqiao He |
+| [HDFS-15150](https://issues.apache.org/jira/browse/HDFS-15150) | Introduce read write lock to Datanode |  Major | datanode | Stephen O'Donnell | Stephen O'Donnell |
+| [HDFS-16175](https://issues.apache.org/jira/browse/HDFS-16175) | Improve the configurable value of Server #PURGE\_INTERVAL\_NANOS |  Major | ipc | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16173](https://issues.apache.org/jira/browse/HDFS-16173) | Improve CopyCommands#Put#executor queue configurability |  Major | fs | JiangHua Zhu | JiangHua Zhu |
+| [HADOOP-17897](https://issues.apache.org/jira/browse/HADOOP-17897) | Allow nested blocks in switch case in checkstyle settings |  Minor | build | Masatake Iwasaki | Masatake Iwasaki |
+| [HADOOP-17857](https://issues.apache.org/jira/browse/HADOOP-17857) | Check real user ACLs in addition to proxied user ACLs |  Major | . | Eric Payne | Eric Payne |
+| [HDFS-14997](https://issues.apache.org/jira/browse/HDFS-14997) | BPServiceActor processes commands from NameNode asynchronously |  Major | datanode | Xiaoqiao He | Xiaoqiao He |
+| [HADOOP-17926](https://issues.apache.org/jira/browse/HADOOP-17926) | Maven-eclipse-plugin is no longer needed since Eclipse can import Maven projects by itself. |  Minor | documentation | Rintaro Ikeda | Rintaro Ikeda |
+| [YARN-10935](https://issues.apache.org/jira/browse/YARN-10935) | AM Total Queue Limit goes below per-user AM Limit if parent is full. |  Major | capacity scheduler, capacityscheduler | Eric Payne | Eric Payne |
+| [HDFS-16241](https://issues.apache.org/jira/browse/HDFS-16241) | Standby close reconstruction thread |  Major | . | zhanghuazong | zhanghuazong |
+| [YARN-1115](https://issues.apache.org/jira/browse/YARN-1115) | Provide optional means for a scheduler to check real user ACLs |  Major | capacity scheduler, scheduler | Eric Payne |  |
+| [HDFS-16279](https://issues.apache.org/jira/browse/HDFS-16279) | Print detail datanode info when process first storage report |  Minor | . | Tao Li | Tao Li |
+| [HDFS-16294](https://issues.apache.org/jira/browse/HDFS-16294) | Remove invalid DataNode#CONFIG\_PROPERTY\_SIMULATED |  Major | datanode | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16299](https://issues.apache.org/jira/browse/HDFS-16299) | Fix bug for TestDataNodeVolumeMetrics#verifyDataNodeVolumeMetrics |  Minor | . | Tao Li | Tao Li |
+| [HDFS-16301](https://issues.apache.org/jira/browse/HDFS-16301) | Improve BenchmarkThroughput#SIZE naming standardization |  Minor | benchmarks, test | JiangHua Zhu | JiangHua Zhu |
+| [YARN-10997](https://issues.apache.org/jira/browse/YARN-10997) | Revisit allocation and reservation logging |  Major | . | Andras Gyori | Andras Gyori |
+| [HDFS-16315](https://issues.apache.org/jira/browse/HDFS-16315) | Add metrics related to Transfer and NativeCopy for DataNode |  Major | . | Tao Li | Tao Li |
+| [HADOOP-17998](https://issues.apache.org/jira/browse/HADOOP-17998) | Allow get command to run with multi threads. |  Major | fs | Chengwei Wang | Chengwei Wang |
+| [HDFS-16345](https://issues.apache.org/jira/browse/HDFS-16345) | Fix test cases fail in TestBlockStoragePolicy |  Major | build | guophilipse | guophilipse |
+| [HADOOP-18035](https://issues.apache.org/jira/browse/HADOOP-18035) | Skip unit test failures to run all the unit tests |  Major | build | Akira Ajisaka | Akira Ajisaka |
+| [HADOOP-18040](https://issues.apache.org/jira/browse/HADOOP-18040) | Use maven.test.failure.ignore instead of ignoreTestFailure |  Major | build | Akira Ajisaka | Akira Ajisaka |
+| [HDFS-16352](https://issues.apache.org/jira/browse/HDFS-16352) | return the real datanode numBlocks in #getDatanodeStorageReport |  Major | . | qinyuren | qinyuren |
+| [HDFS-16386](https://issues.apache.org/jira/browse/HDFS-16386) | Reduce DataNode load when FsDatasetAsyncDiskService is working |  Major | datanode | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16391](https://issues.apache.org/jira/browse/HDFS-16391) | Avoid evaluation of LOG.debug statement in NameNodeHeartbeatService |  Trivial | . | wangzhaohui | wangzhaohui |
+| [YARN-8234](https://issues.apache.org/jira/browse/YARN-8234) | Improve RM system metrics publisher's performance by pushing events to timeline server in batch |  Critical | resourcemanager, timelineserver | Hu Ziqian | Ashutosh Gupta |
+| [HDFS-16430](https://issues.apache.org/jira/browse/HDFS-16430) | Validate maximum blocks in EC group when adding an EC policy |  Minor | ec, erasure-coding | daimin | daimin |
+| [HDFS-16403](https://issues.apache.org/jira/browse/HDFS-16403) | Improve FUSE IO performance by supporting FUSE parameter max\_background |  Minor | fuse-dfs | daimin | daimin |
+| [HADOOP-18136](https://issues.apache.org/jira/browse/HADOOP-18136) | Verify FileUtils.unTar() handling of missing .tar files |  Minor | test, util | Steve Loughran | Steve Loughran |
+| [HDFS-16529](https://issues.apache.org/jira/browse/HDFS-16529) | Remove unnecessary setObserverRead in TestConsistentReadsObserver |  Trivial | test | wangzhaohui | wangzhaohui |
+| [HDFS-16530](https://issues.apache.org/jira/browse/HDFS-16530) | setReplication debug log creates a new string even if debug is disabled |  Major | namenode | Stephen O'Donnell | Stephen O'Donnell |
+| [HDFS-16427](https://issues.apache.org/jira/browse/HDFS-16427) | Add debug log for BlockManager#chooseExcessRedundancyStriped |  Minor | erasure-coding | Tao Li | Tao Li |
+| [HDFS-16389](https://issues.apache.org/jira/browse/HDFS-16389) | Improve NNThroughputBenchmark test mkdirs |  Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu |
+| [MAPREDUCE-7373](https://issues.apache.org/jira/browse/MAPREDUCE-7373) | Building MapReduce NativeTask fails on Fedora 34+ |  Major | build, nativetask | Kengo Seki | Kengo Seki |
+| [HDFS-16355](https://issues.apache.org/jira/browse/HDFS-16355) | Improve the description of dfs.block.scanner.volume.bytes.per.second |  Minor | documentation, hdfs | guophilipse | guophilipse |
+| [HADOOP-18088](https://issues.apache.org/jira/browse/HADOOP-18088) | Replace log4j 1.x with reload4j |  Major | . | Wei-Chiu Chuang | Wei-Chiu Chuang |
+| [HDFS-16501](https://issues.apache.org/jira/browse/HDFS-16501) | Print the exception when reporting a bad block |  Major | datanode | qinyuren | qinyuren |
+| [YARN-11116](https://issues.apache.org/jira/browse/YARN-11116) | Migrate Times util from SimpleDateFormat to thread-safe DateTimeFormatter class |  Minor | . | Jonathan Turner Eagles | Jonathan Turner Eagles |
+| [YARN-10080](https://issues.apache.org/jira/browse/YARN-10080) | Support show app id on localizer thread pool |  Major | nodemanager | zhoukang | Ashutosh Gupta |
+| [HADOOP-18240](https://issues.apache.org/jira/browse/HADOOP-18240) | Upgrade Yetus to 0.14.0 |  Major | build | Akira Ajisaka | Ashutosh Gupta |
+| [HDFS-16585](https://issues.apache.org/jira/browse/HDFS-16585) | Add @VisibleForTesting in Dispatcher.java after HDFS-16268 |  Trivial | . | Wei-Chiu Chuang | Ashutosh Gupta |
+| [HDFS-16610](https://issues.apache.org/jira/browse/HDFS-16610) | Make fsck read timeout configurable |  Major | hdfs-client | Stephen O'Donnell | Stephen O'Donnell |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-13983](https://issues.apache.org/jira/browse/HDFS-13983) | TestOfflineImageViewer crashes in windows |  Major | . | Vinayakumar B | Vinayakumar B |
+| [YARN-9744](https://issues.apache.org/jira/browse/YARN-9744) | RollingLevelDBTimelineStore.getEntityByTime fails with NPE |  Major | timelineserver | Prabhu Joseph | Prabhu Joseph |
+| [HDFS-15113](https://issues.apache.org/jira/browse/HDFS-15113) | Missing IBR when NameNode restart if open processCommand async feature |  Blocker | datanode | Xiaoqiao He | Xiaoqiao He |
+| [HADOOP-16985](https://issues.apache.org/jira/browse/HADOOP-16985) | Handle release package related issues |  Major | . | Vinayakumar B | Vinayakumar B |
+| [HADOOP-17116](https://issues.apache.org/jira/browse/HADOOP-17116) | Skip Retry INFO logging on first failover from a proxy |  Major | ha | Hanisha Koneru | Hanisha Koneru |
+| [HDFS-15651](https://issues.apache.org/jira/browse/HDFS-15651) | Client could not obtain block when DN CommandProcessingThread exit |  Major | . | Yiqun Lin | Mingxiang Li |
+| [HDFS-15963](https://issues.apache.org/jira/browse/HDFS-15963) | Unreleased volume references cause an infinite loop |  Critical | datanode | Shuyan Zhang | Shuyan Zhang |
+| [HDFS-14575](https://issues.apache.org/jira/browse/HDFS-14575) | LeaseRenewer#daemon threads leak in DFSClient |  Major | . | Tao Yang | Renukaprasad C |
+| [HADOOP-17796](https://issues.apache.org/jira/browse/HADOOP-17796) | Upgrade jetty version to 9.4.43 |  Major | . | Wei-Chiu Chuang | Renukaprasad C |
+| [HDFS-15175](https://issues.apache.org/jira/browse/HDFS-15175) | Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog |  Critical | . | Yicong Cai | Wan Chang |
+| [HDFS-16177](https://issues.apache.org/jira/browse/HDFS-16177) | Bug fix for Util#receiveFile |  Minor | . | Tao Li | Tao Li |
+| [YARN-10814](https://issues.apache.org/jira/browse/YARN-10814) | YARN shouldn't start with empty hadoop.http.authentication.signature.secret.file |  Major | . | Benjamin Teke | Tamas Domok |
+| [HADOOP-17874](https://issues.apache.org/jira/browse/HADOOP-17874) | ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-15129](https://issues.apache.org/jira/browse/HADOOP-15129) | Datanode caches namenode DNS lookup failure and cannot startup |  Minor | ipc | Karthik Palaniappan | Chris Nauroth |
+| [YARN-10901](https://issues.apache.org/jira/browse/YARN-10901) | Permission checking error on an existing directory in LogAggregationFileController#verifyAndCreateRemoteLogDir |  Major | nodemanager | Tamas Domok | Tamas Domok |
+| [HDFS-16207](https://issues.apache.org/jira/browse/HDFS-16207) | Remove NN logs stack trace for non-existent xattr query |  Major | namenode | Ahmed Hussein | Ahmed Hussein |
+| [HDFS-16187](https://issues.apache.org/jira/browse/HDFS-16187) | SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing |  Major | snapshots | Srinivasu Majeti | Shashikant Banerjee |
+| [HDFS-16198](https://issues.apache.org/jira/browse/HDFS-16198) | Short circuit read leaks Slot objects when InvalidToken exception is thrown |  Major | . | Eungsop Yoo | Eungsop Yoo |
+| [YARN-10870](https://issues.apache.org/jira/browse/YARN-10870) | Missing user filtering check -\> yarn.webapp.filter-entity-list-by-user for RM Scheduler page |  Major | yarn | Siddharth Ahuja | Gergely Pollák |
+| [HADOOP-17919](https://issues.apache.org/jira/browse/HADOOP-17919) | Fix command line example in Hadoop Cluster Setup documentation |  Minor | documentation | Rintaro Ikeda | Rintaro Ikeda |
+| [HDFS-16235](https://issues.apache.org/jira/browse/HDFS-16235) | Deadlock in LeaseRenewer for static remove method |  Major | hdfs | angerszhu | angerszhu |
+| [HDFS-16181](https://issues.apache.org/jira/browse/HDFS-16181) | [SBN Read] Fix metric of RpcRequestCacheMissAmount can't display when tailEditLog form JN |  Critical | . | wangzhaohui | wangzhaohui |
+| [HADOOP-17925](https://issues.apache.org/jira/browse/HADOOP-17925) | BUILDING.txt should not encourage to activate docs profile on building binary artifacts |  Minor | documentation | Rintaro Ikeda | Masatake Iwasaki |
+| [HADOOP-16532](https://issues.apache.org/jira/browse/HADOOP-16532) | Fix TestViewFsTrash to use the correct homeDir. |  Minor | test, viewfs | Steve Loughran | Xing Lin |
+| [HDFS-16268](https://issues.apache.org/jira/browse/HDFS-16268) | Balancer stuck when moving striped blocks due to NPE |  Major | balancer & mover, erasure-coding | Leon Gao | Leon Gao |
+| [HDFS-7612](https://issues.apache.org/jira/browse/HDFS-7612) | TestOfflineEditsViewer.testStored() uses incorrect default value for cacheDir |  Major | test | Konstantin Shvachko | Michael Kuchenbecker |
+| [HDFS-16311](https://issues.apache.org/jira/browse/HDFS-16311) | Metric metadataOperationRate calculation error in DataNodeVolumeMetrics |  Major | . | Tao Li | Tao Li |
+| [HDFS-16182](https://issues.apache.org/jira/browse/HDFS-16182) | numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage |  Major | namenode | Max Xie | Max Xie |
+| [HADOOP-17999](https://issues.apache.org/jira/browse/HADOOP-17999) | No-op implementation of setWriteChecksum and setVerifyChecksum in ViewFileSystem |  Major | . | Abhishek Das | Abhishek Das |
+| [HDFS-16329](https://issues.apache.org/jira/browse/HDFS-16329) | Fix log format for BlockManager |  Minor | . | Tao Li | Tao Li |
+| [HDFS-16330](https://issues.apache.org/jira/browse/HDFS-16330) | Fix incorrect placeholder for Exception logs in DiskBalancer |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16328](https://issues.apache.org/jira/browse/HDFS-16328) | Correct disk balancer param desc |  Minor | documentation, hdfs | guophilipse | guophilipse |
+| [HDFS-16343](https://issues.apache.org/jira/browse/HDFS-16343) | Add some debug logs when the dfsUsed are not used during Datanode startup |  Major | datanode | Mukul Kumar Singh | Mukul Kumar Singh |
+| [YARN-10991](https://issues.apache.org/jira/browse/YARN-10991) | Fix to ignore the grouping "[]" for resourcesStr in parseResourcesString method |  Minor | distributed-shell | Ashutosh Gupta | Ashutosh Gupta |
+| [HADOOP-17975](https://issues.apache.org/jira/browse/HADOOP-17975) | Fallback to simple auth does not work for a secondary DistributedFileSystem instance |  Major | ipc | István Fajth | István Fajth |
+| [HDFS-16350](https://issues.apache.org/jira/browse/HDFS-16350) | Datanode start time should be set after RPC server starts successfully |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [YARN-11007](https://issues.apache.org/jira/browse/YARN-11007) | Correct words in YARN documents |  Minor | documentation | guophilipse | guophilipse |
+| [HDFS-16332](https://issues.apache.org/jira/browse/HDFS-16332) | Expired block token causes slow read due to missing handling in sasl handshake |  Major | datanode, dfs, dfsclient | Shinya Yoshida | Shinya Yoshida |
+| [YARN-9063](https://issues.apache.org/jira/browse/YARN-9063) | ATS 1.5 fails to start if RollingLevelDb files are corrupt or missing |  Major | timelineserver, timelineservice | Tarun Parimi | Ashutosh Gupta |
+| [HDFS-16333](https://issues.apache.org/jira/browse/HDFS-16333) | fix balancer bug when transfer an EC block |  Major | balancer & mover, erasure-coding | qinyuren | qinyuren |
+| [HDFS-16373](https://issues.apache.org/jira/browse/HDFS-16373) | Fix MiniDFSCluster restart in case of multiple namenodes |  Major | . | Ayush Saxena | Ayush Saxena |
+| [HDFS-16377](https://issues.apache.org/jira/browse/HDFS-16377) | Should CheckNotNull before access FsDatasetSpi |  Major | . | Tao Li | Tao Li |
+| [YARN-6862](https://issues.apache.org/jira/browse/YARN-6862) | Nodemanager resource usage metrics sometimes are negative |  Major | nodemanager | YunFan Zhou | Benjamin Teke |
+| [YARN-10178](https://issues.apache.org/jira/browse/YARN-10178) | Global Scheduler async thread crash caused by 'Comparison method violates its general contract' |  Major | capacity scheduler | tuyu | Andras Gyori |
+| [HDFS-16395](https://issues.apache.org/jira/browse/HDFS-16395) | Remove useless NNThroughputBenchmark#dummyActionNoSynch() |  Major | benchmarks, namenode | JiangHua Zhu | JiangHua Zhu |
+| [HADOOP-18063](https://issues.apache.org/jira/browse/HADOOP-18063) | Remove unused import AbstractJavaKeyStoreProvider in Shell class |  Minor | . | JiangHua Zhu | JiangHua Zhu |
+| [HDFS-16409](https://issues.apache.org/jira/browse/HDFS-16409) | Fix typo: testHasExeceptionsReturnsCorrectValue -\> testHasExceptionsReturnsCorrectValue |  Trivial | . | Ashutosh Gupta | Ashutosh Gupta |
+| [HDFS-16408](https://issues.apache.org/jira/browse/HDFS-16408) | Ensure LeaseRecheckIntervalMs is greater than zero |  Major | namenode | Jingxuan Fu | Jingxuan Fu |
+| [YARN-11055](https://issues.apache.org/jira/browse/YARN-11055) | In cgroups-operations.c some fprintf format strings don't end with "\\n" |  Minor | nodemanager | Gera Shegalov | Gera Shegalov |
+| [HDFS-16303](https://issues.apache.org/jira/browse/HDFS-16303) | Losing over 100 datanodes in state decommissioning results in full blockage of all datanode decommissioning |  Major | . | Kevin Wikant | Kevin Wikant |
+| [HDFS-16443](https://issues.apache.org/jira/browse/HDFS-16443) | Fix edge case where DatanodeAdminDefaultMonitor doubly enqueues a DatanodeDescriptor on exception |  Major | hdfs | Kevin Wikant | Kevin Wikant |
+| [HDFS-16449](https://issues.apache.org/jira/browse/HDFS-16449) | Fix hadoop web site release notes and changelog not available |  Minor | documentation | guophilipse | guophilipse |
+| [HADOOP-18192](https://issues.apache.org/jira/browse/HADOOP-18192) | Fix multiple\_bindings warning about slf4j-reload4j |  Major | . | Masatake Iwasaki | Masatake Iwasaki |
+| [HDFS-16479](https://issues.apache.org/jira/browse/HDFS-16479) | EC: NameNode should not send a reconstruction work when the source datanodes are insufficient |  Critical | ec, erasure-coding | Yuanbo Liu | Takanobu Asanuma |
+| [HDFS-16509](https://issues.apache.org/jira/browse/HDFS-16509) | Fix decommission UnsupportedOperationException: Remove unsupported |  Major | namenode | daimin | daimin |
+| [HDFS-16456](https://issues.apache.org/jira/browse/HDFS-16456) | EC: Decommission a rack with only one dn will fail when the rack number is equal with replication |  Critical | ec, namenode | caozhiqiang | caozhiqiang |
+| [HDFS-16437](https://issues.apache.org/jira/browse/HDFS-16437) | ReverseXML processor doesn't accept XML files without the SnapshotDiffSection. |  Critical | hdfs | yanbin.zhang | yanbin.zhang |
+| [HDFS-16507](https://issues.apache.org/jira/browse/HDFS-16507) | [SBN read] Avoid purging edit log which is in progress |  Critical | . | Tao Li | Tao Li |
+| [YARN-10720](https://issues.apache.org/jira/browse/YARN-10720) | YARN WebAppProxyServlet should support connection timeout to prevent proxy server from hanging |  Critical | . | Qi Zhu | Qi Zhu |
+| [HDFS-16428](https://issues.apache.org/jira/browse/HDFS-16428) | Source path with storagePolicy cause wrong typeConsumed while rename |  Major | hdfs, namenode | lei w | lei w |
+| [YARN-11014](https://issues.apache.org/jira/browse/YARN-11014) | YARN incorrectly validates maximum capacity resources on the validation API |  Major | . | Benjamin Teke | Benjamin Teke |
+| [YARN-11075](https://issues.apache.org/jira/browse/YARN-11075) | Explicitly declare serialVersionUID in LogMutation class |  Major | . | Benjamin Teke | Benjamin Teke |
+| [HDFS-11041](https://issues.apache.org/jira/browse/HDFS-11041) | Unable to unregister FsDatasetState MBean if DataNode is shutdown twice |  Trivial | datanode | Wei-Chiu Chuang | Wei-Chiu Chuang |
+| [HDFS-16538](https://issues.apache.org/jira/browse/HDFS-16538) |  EC decoding failed due to not enough valid inputs |  Major | erasure-coding | qinyuren | qinyuren |
+| [HDFS-16544](https://issues.apache.org/jira/browse/HDFS-16544) | EC decoding failed due to invalid buffer |  Major | erasure-coding | qinyuren | qinyuren |
+| [HDFS-16546](https://issues.apache.org/jira/browse/HDFS-16546) | Fix UT TestOfflineImageViewer#testReverseXmlWithoutSnapshotDiffSection to branch branch-3.2 |  Major | test | daimin | daimin |
+| [HDFS-16552](https://issues.apache.org/jira/browse/HDFS-16552) | Fix NPE for TestBlockManager |  Major | . | Tao Li | Tao Li |
+| [MAPREDUCE-7246](https://issues.apache.org/jira/browse/MAPREDUCE-7246) |  In MapredAppMasterRest#Mapreduce\_Application\_Master\_Info\_API, the datatype of appId should be "string". |  Major | documentation | jenny | Ashutosh Gupta |
+| [YARN-10187](https://issues.apache.org/jira/browse/YARN-10187) | Removing hadoop-yarn-project/hadoop-yarn/README as it is no longer maintained. |  Minor | documentation | N Sanketh Reddy | Ashutosh Gupta |
+| [HDFS-16185](https://issues.apache.org/jira/browse/HDFS-16185) | Fix comment in LowRedundancyBlocks.java |  Minor | documentation | Akira Ajisaka | Ashutosh Gupta |
+| [HADOOP-17479](https://issues.apache.org/jira/browse/HADOOP-17479) | Fix the examples of hadoop config prefix |  Minor | documentation | Akira Ajisaka | Ashutosh Gupta |
+| [HDFS-16579](https://issues.apache.org/jira/browse/HDFS-16579) | Fix build failure for TestBlockManager on branch-3.2 |  Major | . | Tao Li | Tao Li |
+| [YARN-11092](https://issues.apache.org/jira/browse/YARN-11092) | Upgrade jquery ui to 1.13.1 |  Major | . | D M Murali Krishna Reddy | Ashutosh Gupta |
+| [YARN-11133](https://issues.apache.org/jira/browse/YARN-11133) | YarnClient gets the wrong EffectiveMinCapacity value |  Major | api | Zilong Zhu | Zilong Zhu |
+| [YARN-10850](https://issues.apache.org/jira/browse/YARN-10850) | TimelineService v2 lists containers for all attempts when filtering for one |  Major | timelinereader | Benjamin Teke | Benjamin Teke |
+| [YARN-11126](https://issues.apache.org/jira/browse/YARN-11126) | ZKConfigurationStore Java deserialisation vulnerability |  Major | yarn | Tamas Domok | Tamas Domok |
+| [YARN-11162](https://issues.apache.org/jira/browse/YARN-11162) | Set the zk acl for nodes created by ZKConfigurationStore. |  Major | resourcemanager | Owen O'Malley | Owen O'Malley |
+| [HDFS-16586](https://issues.apache.org/jira/browse/HDFS-16586) | Purge FsDatasetAsyncDiskService threadgroup; it causes BPServiceActor$CommandProcessingThread IllegalThreadStateException 'fatal exception and exit' |  Major | datanode | Michael Stack | Michael Stack |
+| [HADOOP-18251](https://issues.apache.org/jira/browse/HADOOP-18251) | Fix failure of extracting JIRA id from commit message in git\_jira\_fix\_version\_check.py |  Minor | build | Masatake Iwasaki | Masatake Iwasaki |
+| [HDFS-16583](https://issues.apache.org/jira/browse/HDFS-16583) | DatanodeAdminDefaultMonitor can get stuck in an infinite loop |  Major | . | Stephen O'Donnell | Stephen O'Donnell |
+| [HDFS-16623](https://issues.apache.org/jira/browse/HDFS-16623) | IllegalArgumentException in LifelineSender |  Major | . | ZanderXu | ZanderXu |
+| [HDFS-16064](https://issues.apache.org/jira/browse/HDFS-16064) | Determine when to invalidate corrupt replicas based on number of usable replicas |  Major | datanode, namenode | Kevin Wikant | Kevin Wikant |
+| [HADOOP-18100](https://issues.apache.org/jira/browse/HADOOP-18100) | Change scope of inner classes in InodeTree to make them accessible outside package |  Major | . | Abhishek Das | Abhishek Das |
+| [HADOOP-18334](https://issues.apache.org/jira/browse/HADOOP-18334) | Fix create-release to address removal of GPG\_AGENT\_INFO in branch-3.2 |  Major | build | Masatake Iwasaki | Masatake Iwasaki |
+
+
+### TESTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [MAPREDUCE-7342](https://issues.apache.org/jira/browse/MAPREDUCE-7342) | Stop RMService in TestClientRedirect.testRedirect() |  Minor | . | Zhengxi Li | Zhengxi Li |
+| [MAPREDUCE-7311](https://issues.apache.org/jira/browse/MAPREDUCE-7311) | Fix non-idempotent test in TestTaskProgressReporter |  Minor | . | Zhengxi Li | Zhengxi Li |
+| [HDFS-15862](https://issues.apache.org/jira/browse/HDFS-15862) | Make TestViewfsWithNfs3.testNfsRenameSingleNN() idempotent |  Minor | nfs | Zhengxi Li | Zhengxi Li |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-15457](https://issues.apache.org/jira/browse/HDFS-15457) | TestFsDatasetImpl fails intermittently |  Major | hdfs | Ahmed Hussein | Ahmed Hussein |
+| [HDFS-15818](https://issues.apache.org/jira/browse/HDFS-15818) | Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig |  Minor | test | Leon Gao | Leon Gao |
+| [YARN-10503](https://issues.apache.org/jira/browse/YARN-10503) | Support queue capacity in terms of absolute resources with custom resourceType. |  Critical | . | Qi Zhu | Qi Zhu |
+| [HADOOP-17126](https://issues.apache.org/jira/browse/HADOOP-17126) | implement non-guava Precondition checkNotNull |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17929](https://issues.apache.org/jira/browse/HADOOP-17929) | implement non-guava Precondition checkArgument |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17947](https://issues.apache.org/jira/browse/HADOOP-17947) | Provide alternative to Guava VisibleForTesting |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-17930](https://issues.apache.org/jira/browse/HADOOP-17930) | implement non-guava Precondition checkState |  Major | . | Ahmed Hussein | Ahmed Hussein |
+| [HADOOP-17374](https://issues.apache.org/jira/browse/HADOOP-17374) | AliyunOSS: support ListObjectsV2 |  Major | fs/oss | wujinhu | wujinhu |
+| [HDFS-16336](https://issues.apache.org/jira/browse/HDFS-16336) | De-flake TestRollingUpgrade#testRollback |  Minor | hdfs, test | Kevin Wikant | Viraj Jasani |
+| [HDFS-16171](https://issues.apache.org/jira/browse/HDFS-16171) | De-flake testDecommissionStatus |  Major | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16169](https://issues.apache.org/jira/browse/HDFS-16169) | Fix TestBlockTokenWithDFSStriped#testEnd2End failure |  Major | test | Hui Fei | secfree |
+| [HDFS-16484](https://issues.apache.org/jira/browse/HDFS-16484) | [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread |  Major | . | qinyuren | qinyuren |
+| [HADOOP-16663](https://issues.apache.org/jira/browse/HADOOP-16663) | Backport "HADOOP-16560 [YARN] use protobuf-maven-plugin to generate protobuf classes" to all active branches |  Major | . | Duo Zhang | Duo Zhang |
+| [HADOOP-16664](https://issues.apache.org/jira/browse/HADOOP-16664) | Backport "HADOOP-16561 [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes" to all active branches |  Major | . | Duo Zhang | Duo Zhang |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-16298](https://issues.apache.org/jira/browse/HDFS-16298) | Improve error msg for BlockMissingException |  Minor | . | Tao Li | Tao Li |
+| [HDFS-16312](https://issues.apache.org/jira/browse/HDFS-16312) | Fix typo for DataNodeVolumeMetrics and ProfilingFileIoEvents |  Minor | . | Tao Li | Tao Li |
+| [HDFS-16326](https://issues.apache.org/jira/browse/HDFS-16326) | Simplify the code for DiskBalancer |  Minor | . | Tao Li | Tao Li |
+| [HDFS-16339](https://issues.apache.org/jira/browse/HDFS-16339) | Show the threshold when mover threads quota is exceeded |  Minor | . | Tao Li | Tao Li |
+| [YARN-10820](https://issues.apache.org/jira/browse/YARN-10820) | Make GetClusterNodesRequestPBImpl thread safe |  Major | client | Prabhu Joseph | SwathiChandrashekar |
+| [HADOOP-13464](https://issues.apache.org/jira/browse/HADOOP-13464) | update GSON to 2.7+ |  Minor | build | Sean Busbey | Igor Dvorzhak |
+| [HADOOP-18191](https://issues.apache.org/jira/browse/HADOOP-18191) | Log retry count while handling exceptions in RetryInvocationHandler |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [HDFS-16551](https://issues.apache.org/jira/browse/HDFS-16551) | Backport HADOOP-17588 to 3.3 and other active old branches. |  Major | . | Renukaprasad C | Renukaprasad C |
+| [HDFS-16618](https://issues.apache.org/jira/browse/HDFS-16618) | sync\_file\_range error should include more volume and file info |  Minor | . | Viraj Jasani | Viraj Jasani |
+| [HADOOP-18300](https://issues.apache.org/jira/browse/HADOOP-18300) | Update Gson to 2.9.0 |  Minor | build | Igor Dvorzhak | Igor Dvorzhak |
+
+
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md
new file mode 100644
index 00000000000..fac976d655d
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/3.2.4/RELEASENOTES.3.2.4.md
@@ -0,0 +1,55 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop  3.2.4 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.
+
+
+---
+
+* [YARN-10820](https://issues.apache.org/jira/browse/YARN-10820) | *Major* | **Make GetClusterNodesRequestPBImpl thread safe**
+
+Added synchronization so that the "yarn node list" command does not fail intermittently.
+
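As an illustration of the fix described above, here is a minimal sketch of the synchronized-accessor pattern that makes a mutable protobuf wrapper safe to share across threads; the class and field names are hypothetical, not the actual GetClusterNodesRequestPBImpl code:

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of a thread-safe mutable request wrapper; names are
    // hypothetical, not the actual GetClusterNodesRequestPBImpl code.
    public class ThreadSafeNodesRequest {
      private final List<String> nodeStates = new ArrayList<>();

      // All reads and writes of the shared list go through synchronized methods.
      public synchronized void addNodeState(String state) {
        nodeStates.add(state);
      }

      public synchronized List<String> getNodeStates() {
        return new ArrayList<>(nodeStates);  // defensive copy, never the live list
      }
    }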
+
+---
+
+* [YARN-8234](https://issues.apache.org/jira/browse/YARN-8234) | *Critical* | **Improve RM system metrics publisher's performance by pushing events to timeline server in batch**
+
+When Timeline Service V1 or V1.5 is used, and "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.enable-batch" is set to true, the ResourceManager sends timeline events in batches. The default value is false. When this functionality is enabled, the maximum number of events published in a batch is configured by "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.batch-size". The default value is 1000. The interval of publishing events can be configured by "yarn.resourc [...]
+
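A minimal sketch of setting this up programmatically, using only the two property names quoted above (the values shown are illustrative; 1000 is the documented default batch size):

    import org.apache.hadoop.conf.Configuration;

    // Minimal sketch: enable batched timeline event publishing using the two
    // properties named in the release note above.
    public class BatchPublishingConfig {
      public static Configuration configure() {
        Configuration conf = new Configuration();
        conf.setBoolean(
            "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.enable-batch",
            true);
        conf.setInt(
            "yarn.resourcemanager.system-metrics-publisher.timeline-server-v1.batch-size",
            1000);  // documented default
        return conf;
      }
    }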
+
+---
+
+* [HADOOP-18088](https://issues.apache.org/jira/browse/HADOOP-18088) | *Major* | **Replace log4j 1.x with reload4j**
+
+log4j 1 was replaced with reload4j, which is a fork of log4j 1.2.17 with the goal of fixing pressing security issues.
+
+If the hadoop artifacts in your build were explicitly excluding the log4j artifacts, and you now want to exclude the reload4j artifacts instead, you will need to update the exclusion lists in the relevant \<dependency\> entries, for example:
+\<exclusion\>
+  \<groupId\>org.slf4j\</groupId\>
+  \<artifactId\>slf4j-reload4j\</artifactId\>
+\</exclusion\>
+\<exclusion\>
+  \<groupId\>ch.qos.reload4j\</groupId\>
+  \<artifactId\>reload4j\</artifactId\>
+\</exclusion\>
+
+
+
diff --git a/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml
new file mode 100644
index 00000000000..2aa6ef1cdb5
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff/Apache_Hadoop_HDFS_3.2.4.xml
@@ -0,0 +1,674 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Tue Jul 12 12:52:30 GMT 2022 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop HDFS 3.2.4"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet -docletpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar:/build/source/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar -verbose -classpath /build/source/hadoop-hdfs-project/hadoop-hdfs/target/classes:/build/source/hadoop-common-project/hadoop-annotations/target/hadoop-annotations-3.2.4.jar:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar:/build [...]
+<package name="org.apache.hadoop.hdfs">
+  <doc>
+  <![CDATA[<p>A distributed implementation of {@link
+org.apache.hadoop.fs.FileSystem}.  This is loosely modelled after
+Google's <a href="http://research.google.com/archive/gfs.html">GFS</a>.</p>
+
+<p>The most important difference is that unlike GFS, Hadoop DFS files 
+have strictly one writer at any one time.  Bytes are always appended 
+to the end of the writer's stream.  There is no notion of "record appends"
+or "mutations" that are then checked or reordered.  Writers simply emit 
+a byte stream.  That byte stream is guaranteed to be stored in the 
+order written.</p>]]>
+  </doc>
+</package>
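A minimal sketch of the single-writer, append-only model described in that package comment, using the public FileSystem API; the path is hypothetical, and FileSystem.get(conf) resolves whatever fs.defaultFS is configured:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch of the single-writer model: one writer holds the output
    // stream and bytes are stored in the order written.
    public class SingleWriterExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/example.txt");  // hypothetical path
        try (FSDataOutputStream out = fs.create(p, true)) {
          out.writeBytes("bytes are appended to the end of the writer's stream\n");
        }
      }
    }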
+<package name="org.apache.hadoop.hdfs.net">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer">
+</package>
+<package name="org.apache.hadoop.hdfs.protocol.datatransfer.sasl">
+</package>
+<package name="org.apache.hadoop.hdfs.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.client">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocol">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.protocolPB">
+</package>
+<package name="org.apache.hadoop.hdfs.qjournal.server">
+  <!-- start interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+  <interface name="JournalNodeMXBean"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getJournalsStatus" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get status information (e.g., whether formatted) of JournalNode's journals.
+ 
+ @return A string presenting status for each journal]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[This is the JMX management interface for JournalNode information]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.qjournal.server.JournalNodeMXBean -->
+</package>
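A minimal sketch of the MXBean convention this interface follows: a class implementing an interface whose simple name ends in MXBean can be registered with the platform MBean server. The ObjectName and status payload below are hypothetical, and the toy interface merely mirrors the real one:

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class JournalStatusExample {
      // Local mirror of the JournalNodeMXBean shape shown above.
      public interface ToyJournalNodeMXBean {
        String getJournalsStatus();
      }

      public static class ToyJournalNode implements ToyJournalNodeMXBean {
        @Override
        public String getJournalsStatus() {
          return "{\"journal1\": {\"Formatted\": \"true\"}}";  // illustrative payload
        }
      }

      public static void main(String[] args) throws Exception {
        // The interface name ending in "MXBean" makes this a valid MXBean.
        ManagementFactory.getPlatformMBeanServer().registerMBean(
            new ToyJournalNode(),
            new ObjectName("Example:type=ToyJournalNode"));  // hypothetical name
      }
    }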
+<package name="org.apache.hadoop.hdfs.security.token.block">
+</package>
+<package name="org.apache.hadoop.hdfs.security.token.delegation">
+</package>
+<package name="org.apache.hadoop.hdfs.server.aliasmap">
+  <!-- start class org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap -->
+  <class name="InMemoryAliasMap" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol"/>
+    <implements name="org.apache.hadoop.conf.Configurable"/>
+    <method name="setConf"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="getConf" return="org.apache.hadoop.conf.Configuration"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="init" return="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="list" return="org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol.IterationResult"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="marker" type="java.util.Optional"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="read" return="java.util.Optional"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="block" type="org.apache.hadoop.hdfs.protocol.Block"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="write"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="block" type="org.apache.hadoop.hdfs.protocol.Block"/>
+      <param name="providedStorageLocation" type="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getBlockPoolId" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="fromProvidedStorageLocationBytes" return="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="providedStorageLocationDbFormat" type="byte[]"/>
+      <exception name="InvalidProtocolBufferException" type="com.google.protobuf.InvalidProtocolBufferException"/>
+    </method>
+    <method name="fromBlockBytes" return="org.apache.hadoop.hdfs.protocol.Block"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="blockDbFormat" type="byte[]"/>
+      <exception name="InvalidProtocolBufferException" type="com.google.protobuf.InvalidProtocolBufferException"/>
+    </method>
+    <method name="toProtoBufBytes" return="byte[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="providedStorageLocation" type="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="toProtoBufBytes" return="byte[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="block" type="org.apache.hadoop.hdfs.protocol.Block"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[InMemoryAliasMap is an implementation of the InMemoryAliasMapProtocol for
+ use with LevelDB.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.balancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.blockmanagement">
+</package>
+<package name="org.apache.hadoop.hdfs.server.common">
+  <!-- start interface org.apache.hadoop.hdfs.server.common.BlockAlias -->
+  <interface name="BlockAlias"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getBlock" return="org.apache.hadoop.hdfs.protocol.Block"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[Interface used to load provided blocks.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.common.BlockAlias -->
+  <!-- start class org.apache.hadoop.hdfs.server.common.FileRegion -->
+  <class name="FileRegion" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.common.BlockAlias"/>
+    <constructor name="FileRegion" type="long, org.apache.hadoop.fs.Path, long, long, long"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileRegion" type="long, org.apache.hadoop.fs.Path, long, long, long, byte[]"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileRegion" type="long, org.apache.hadoop.fs.Path, long, long"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileRegion" type="org.apache.hadoop.hdfs.protocol.Block, org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getBlock" return="org.apache.hadoop.hdfs.protocol.Block"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getProvidedStorageLocation" return="org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="equals" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="o" type="java.lang.Object"/>
+    </method>
+    <method name="hashCode" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[This class is used to represent provided blocks that are file regions,
+ i.e., can be described using (path, offset, length).]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.FileRegion -->
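A minimal sketch of the (path, offset, length) description quoted in that class comment, as a plain value class; this is illustrative only, since the real FileRegion also wraps a Block and a ProvidedStorageLocation:

    // Minimal sketch: a provided block described as a file region.
    public final class FileRegionSketch {
      private final String path;
      private final long offset;
      private final long length;

      public FileRegionSketch(String path, long offset, long length) {
        this.path = path;
        this.offset = offset;
        this.length = length;
      }

      public String getPath() { return path; }
      public long getOffset() { return offset; }
      public long getLength() { return length; }
    }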
+</package>
+<package name="org.apache.hadoop.hdfs.server.common.blockaliasmap">
+  <!-- start class org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap -->
+  <class name="BlockAliasMap" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="BlockAliasMap"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getReader" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns a reader to the alias map.
+ @param opts reader options
+ @param blockPoolID block pool id to use
+ @return {@link Reader} to the alias map. If a Reader for the blockPoolID
+ cannot be created, this will return null.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getWriter" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns the writer for the alias map.
+ @param opts writer options.
+ @param blockPoolID block pool id to use
+ @return {@link Writer} to the alias map.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="refresh"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Refresh the alias map.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="close"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An abstract class used to read and write block maps for provided blocks.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.common.blockaliasmap.impl">
+  <!-- start class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap -->
+  <class name="LevelDBFileRegionAliasMap" extends="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.conf.Configurable"/>
+    <constructor name="LevelDBFileRegionAliasMap"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setConf"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="getConf" return="org.apache.hadoop.conf.Configuration"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getReader" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getWriter" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="refresh"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <field name="LOG" type="org.slf4j.Logger"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[A LevelDB based implementation of {@link BlockAliasMap}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.LevelDBFileRegionAliasMap -->
+  <!-- start class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap -->
+  <class name="TextFileRegionAliasMap" extends="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.conf.Configurable"/>
+    <constructor name="TextFileRegionAliasMap"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setConf"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+    </method>
+    <method name="getConf" return="org.apache.hadoop.conf.Configuration"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getReader" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Reader.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getWriter" return="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="opts" type="org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap.Writer.Options"/>
+      <param name="blockPoolID" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="refresh"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="blockPoolIDFromFileName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+    </method>
+    <method name="fileNameFromBlockPoolID" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="blockPoolID" type="java.lang.String"/>
+    </method>
+    <field name="LOG" type="org.slf4j.Logger"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[This class is used for block maps stored as text files,
+ with a specified delimiter.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap -->
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.fsdataset.impl">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web">
+</package>
+<package name="org.apache.hadoop.hdfs.server.datanode.web.webhdfs">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.command">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.connectors">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.datamodel">
+</package>
+<package name="org.apache.hadoop.hdfs.server.diskbalancer.planner">
+</package>
+<package name="org.apache.hadoop.hdfs.server.mover">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode">
+  <!-- start interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
+  <interface name="AuditLogger"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="initialize"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Called during initialization of the logger.
+
+ @param conf The configuration object.]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <doc>
+      <![CDATA[Called to log an audit event.
+ <p>
+ This method must return as quickly as possible, since it's called
+ in a critical section of the NameNode's operation.
+
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's
+             metadata (permissions, owner, times, etc).]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Interface defining an audit logger.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.hdfs.server.namenode.AuditLogger -->
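As a hedged illustration of the AuditLogger contract above, a minimal
implementation sketch. The method signatures are taken from the listing; the
class name and output format are hypothetical, and registering the logger via
the dfs.namenode.audit.loggers configuration key is an assumption about
deployment, not part of this listing:

    import java.net.InetAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.hdfs.server.namenode.AuditLogger;

    public class StdoutAuditLogger implements AuditLogger {
      @Override
      public void initialize(Configuration conf) {
        // Called once during NameNode startup; read custom keys here.
      }

      @Override
      public void logAuditEvent(boolean succeeded, String userName,
          InetAddress addr, String cmd, String src, String dst,
          FileStatus stat) {
        // Must return quickly: invoked in a critical section of the NameNode.
        System.out.println((succeeded ? "allowed" : "denied") + " user="
            + userName + " cmd=" + cmd + " src=" + src + " dst=" + dst);
      }
    }
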
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
+  <class name="HdfsAuditLogger" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.hdfs.server.namenode.AuditLogger"/>
+    <constructor name="HdfsAuditLogger"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="logAuditEvent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="status" type="org.apache.hadoop.fs.FileStatus"/>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="callerContext" type="org.apache.hadoop.ipc.CallerContext"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String, String,
+ FileStatus)} with additional parameters related to logging delegation token
+ tracking IDs.
+ 
+ @param succeeded Whether authorization succeeded.
+ @param userName Name of the user executing the request.
+ @param addr Remote address of the request.
+ @param cmd The requested command.
+ @param src Path of affected source file.
+ @param dst Path of affected destination file (if any).
+ @param stat File information for operations that change the file's metadata
+          (permissions, owner, times, etc).
+ @param callerContext Context information of the caller
+ @param ugi UserGroupInformation of the current user, or null if not logging
+          token tracking information
+ @param dtSecretManager The token secret manager, or null if not logging
+          token tracking information]]>
+      </doc>
+    </method>
+    <method name="logAuditEvent"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="succeeded" type="boolean"/>
+      <param name="userName" type="java.lang.String"/>
+      <param name="addr" type="java.net.InetAddress"/>
+      <param name="cmd" type="java.lang.String"/>
+      <param name="src" type="java.lang.String"/>
+      <param name="dst" type="java.lang.String"/>
+      <param name="stat" type="org.apache.hadoop.fs.FileStatus"/>
+      <param name="ugi" type="org.apache.hadoop.security.UserGroupInformation"/>
+      <param name="dtSecretManager" type="org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager"/>
+      <doc>
+      <![CDATA[Same as
+ {@link #logAuditEvent(boolean, String, InetAddress, String, String,
+ String, FileStatus, CallerContext, UserGroupInformation,
+ DelegationTokenSecretManager)} without {@link CallerContext} information.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Extension of {@link AuditLogger}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger -->
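For completeness, a sketch of subclassing HdfsAuditLogger: both abstract
overloads above must be implemented, and the shorter one can delegate to the
longer one with a null CallerContext. The class name and log sink are
hypothetical; signatures follow the listing:

    import java.net.InetAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager;
    import org.apache.hadoop.hdfs.server.namenode.HdfsAuditLogger;
    import org.apache.hadoop.ipc.CallerContext;
    import org.apache.hadoop.security.UserGroupInformation;

    public class TrackingAuditLogger extends HdfsAuditLogger {
      @Override
      public void initialize(Configuration conf) {
        // No configuration needed for this sketch.
      }

      @Override
      public void logAuditEvent(boolean succeeded, String userName,
          InetAddress addr, String cmd, String src, String dst,
          FileStatus stat, CallerContext callerContext,
          UserGroupInformation ugi,
          DelegationTokenSecretManager dtSecretManager) {
        // ugi and dtSecretManager may be null when token tracking
        // information is not being logged (see the javadoc above).
        System.out.println("audit: " + (succeeded ? "allowed" : "denied")
            + " user=" + userName + " cmd=" + cmd + " src=" + src
            + " ctx=" + callerContext);
      }

      @Override
      public void logAuditEvent(boolean succeeded, String userName,
          InetAddress addr, String cmd, String src, String dst,
          FileStatus stat, UserGroupInformation ugi,
          DelegationTokenSecretManager dtSecretManager) {
        logAuditEvent(succeeded, userName, addr, cmd, src, dst, stat,
            null, ugi, dtSecretManager);
      }
    }
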
+  <!-- start class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
+  <class name="INodeAttributeProvider" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="INodeAttributeProvider"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="start"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Initialize the provider. This method is called at NameNode startup
+ time.]]>
+      </doc>
+    </method>
+    <method name="stop"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Shut down the provider. This method is called at NameNode shutdown time.]]>
+      </doc>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="fullPath" type="java.lang.String"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pathElements" type="java.lang.String[]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getAttributes" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="components" type="byte[][]"/>
+      <param name="inode" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributes"/>
+    </method>
+    <method name="getExternalAccessControlEnforcer" return="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="defaultEnforcer" type="org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer"/>
+      <doc>
+      <![CDATA[Can be overridden by implementations to supply a custom
+ AccessControlEnforcer that provides an alternate implementation of the
+ default permission-checking logic.
+ @param defaultEnforcer The Default AccessControlEnforcer
+ @return The AccessControlEnforcer to use]]>
+      </doc>
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider -->
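A minimal pass-through sketch of the INodeAttributeProvider extension points
listed above; only the abstract members are implemented, and the class name is
hypothetical. Registering a provider through
dfs.namenode.inode.attributes.provider.class is an assumption about deployment,
not part of this listing:

    import org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider;
    import org.apache.hadoop.hdfs.server.namenode.INodeAttributes;

    public class PassthroughAttributeProvider extends INodeAttributeProvider {
      @Override
      public void start() {
        // Called at NameNode startup; open external connections here.
      }

      @Override
      public void stop() {
        // Called at NameNode shutdown; release resources here.
      }

      @Override
      public INodeAttributes getAttributes(String[] pathElements,
          INodeAttributes inode) {
        // Return the stored attributes unchanged; a real provider could
        // substitute attributes fetched from an external authorization store.
        return inode;
      }
    }
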
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.ha">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.metrics">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.top.window">
+</package>
+<package name="org.apache.hadoop.hdfs.server.namenode.web.resources">
+</package>
+<package name="org.apache.hadoop.hdfs.tools">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineEditsViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.offlineImageViewer">
+</package>
+<package name="org.apache.hadoop.hdfs.tools.snapshot">
+</package>
+<package name="org.apache.hadoop.hdfs.util">
+</package>
+<package name="org.apache.hadoop.hdfs.web">
+</package>
+<package name="org.apache.hadoop.hdfs.web.resources">
+</package>
+
+</api>
diff --git a/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml
new file mode 100644
index 00000000000..ff74bb7afb2
--- /dev/null
+++ b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Common_3.2.4.xml
@@ -0,0 +1,113 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Tue Jul 12 12:57:31 GMT 2022 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop MapReduce Common 3.2.4"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet -docletpath /build/source/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/target/hadoop-annotations.jar:/build/source/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/target/jdiff.jar -verbose -classpath /build/source/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/target/classes:/ [...]
+<package name="org.apache.hadoop.mapred">
+</package>
+<package name="org.apache.hadoop.mapreduce">
+</package>
+<package name="org.apache.hadoop.mapreduce.v2.api.protocolrecords">
+  <!-- start interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.CancelDelegationTokenRequest -->
+  <interface name="CancelDelegationTokenRequest"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getDelegationToken" return="org.apache.hadoop.yarn.api.records.Token"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setDelegationToken"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="dToken" type="org.apache.hadoop.yarn.api.records.Token"/>
+    </method>
+    <doc>
+    <![CDATA[The request issued by the client to the {@code ResourceManager} to cancel a
+ delegation token.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.CancelDelegationTokenRequest -->
+  <!-- start interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.CancelDelegationTokenResponse -->
+  <interface name="CancelDelegationTokenResponse"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <doc>
+    <![CDATA[The response from the {@code ResourceManager} to a cancelDelegationToken
+ request.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.CancelDelegationTokenResponse -->
+  <!-- start interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.GetDelegationTokenRequest -->
+  <interface name="GetDelegationTokenRequest"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getRenewer" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setRenewer"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="renewer" type="java.lang.String"/>
+    </method>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.GetDelegationTokenRequest -->
+  <!-- start interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenRequest -->
+  <interface name="RenewDelegationTokenRequest"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getDelegationToken" return="org.apache.hadoop.yarn.api.records.Token"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setDelegationToken"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="dToken" type="org.apache.hadoop.yarn.api.records.Token"/>
+    </method>
+    <doc>
+    <![CDATA[The request issued by the client to renew a delegation token from
+ the {@code ResourceManager}.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenRequest -->
+  <!-- start interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenResponse -->
+  <interface name="RenewDelegationTokenResponse"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getNextExpirationTime" return="long"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setNextExpirationTime"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="expTime" type="long"/>
+    </method>
+    <doc>
+    <![CDATA[The response to a renewDelegationToken call to the {@code ResourceManager}.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenResponse -->
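Taken together, these request/response records are plain data carriers. A
hedged sketch of how a client might populate them; the use of
org.apache.hadoop.yarn.util.Records.newRecord to instantiate the records is an
assumption based on the usual Hadoop record-factory pattern, and the renewer
string and token variable are hypothetical:

    import org.apache.hadoop.mapreduce.v2.api.protocolrecords.GetDelegationTokenRequest;
    import org.apache.hadoop.mapreduce.v2.api.protocolrecords.RenewDelegationTokenRequest;
    import org.apache.hadoop.yarn.api.records.Token;
    import org.apache.hadoop.yarn.util.Records;

    public class DelegationTokenRequests {
      static GetDelegationTokenRequest newGetRequest(String renewer) {
        GetDelegationTokenRequest req =
            Records.newRecord(GetDelegationTokenRequest.class);
        req.setRenewer(renewer);        // principal allowed to renew the token
        return req;
      }

      static RenewDelegationTokenRequest newRenewRequest(Token token) {
        RenewDelegationTokenRequest req =
            Records.newRecord(RenewDelegationTokenRequest.class);
        req.setDelegationToken(token);  // a token previously issued
        return req;
      }
    }
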
+</package>
+<package name="org.apache.hadoop.mapreduce.v2.security">
+</package>
+<package name="org.apache.hadoop.yarn.proto">
+</package>
+
+</api>
diff --git a/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml
new file mode 100644
index 00000000000..239b01b4612
--- /dev/null
+++ b/hadoop-mapreduce-project/dev-support/jdiff/Apache_Hadoop_MapReduce_Core_3.2.4.xml
@@ -0,0 +1,28073 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!-- Generated by the JDiff Javadoc doclet -->
+<!-- (http://www.jdiff.org) -->
+<!-- on Tue Jul 12 12:57:12 GMT 2022 -->
+
+<api
+  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
+  xsi:noNamespaceSchemaLocation='api.xsd'
+  name="Apache Hadoop MapReduce Core 3.2.4"
+  jdversion="1.0.9">
+
+<!--  Command line arguments =  -doclet org.apache.hadoop.classification.tools.IncludePublicAnnotationsJDiffDoclet -docletpath /build/source/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/hadoop-annotations.jar:/build/source/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/jdiff.jar -verbose -classpath /build/source/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/classes:/build/ [...]
+<package name="org.apache.hadoop.filecache">
+  <!-- start class org.apache.hadoop.filecache.DistributedCache -->
+  <class name="DistributedCache" extends="org.apache.hadoop.mapreduce.filecache.DistributedCache"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="DistributedCache"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="addLocalArchives"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="str" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Add an archive that has been localized to the conf.  Used
+ by internal DistributedCache code.
+ @param conf The conf to modify to contain the localized caches
+ @param str a comma separated list of local archives]]>
+      </doc>
+    </method>
+    <method name="addLocalFiles"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="str" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Add a file that has been localized to the conf.  Used
+ by internal DistributedCache code.
+ @param conf The conf to modify to contain the localized caches
+ @param str a comma separated list of local files]]>
+      </doc>
+    </method>
+    <method name="createAllSymlink"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="Internal to MapReduce framework.  Use DistributedCacheManager
+ instead.">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="jobCacheDir" type="java.io.File"/>
+      <param name="workDir" type="java.io.File"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method creates symlinks for all files in a given dir in another
+ directory. Currently symlinks cannot be disabled. This is a NO-OP.
+
+ @param conf the configuration
+ @param jobCacheDir the target directory for creating symlinks
+ @param workDir the directory in which the symlinks are created
+ @throws IOException
+ @deprecated Internal to MapReduce framework.  Use DistributedCacheManager
+ instead.]]>
+      </doc>
+    </method>
+    <method name="getFileStatus" return="org.apache.hadoop.fs.FileStatus"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="cache" type="java.net.URI"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns {@link FileStatus} of a given cache file on hdfs. Internal to
+ MapReduce.
+ @param conf configuration
+ @param cache cache file
+ @return <code>FileStatus</code> of a given cache file on hdfs
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getTimestamp" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="cache" type="java.net.URI"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns mtime of a given cache file on hdfs. Internal to MapReduce.
+ @param conf configuration
+ @param cache cache file
+ @return mtime of a given cache file on hdfs
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="setArchiveTimestamps"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="timestamps" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Sets the timestamps of the archives to be localized.
+ Used by internal MapReduce code.
+ @param conf Configuration which stores the timestamps
+ @param timestamps comma separated list of timestamps of archives.
+ The order should be the same as the order in which the archives are added.]]>
+      </doc>
+    </method>
+    <method name="setFileTimestamps"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="timestamps" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Sets the timestamps of the files to be localized.
+ Used by internal MapReduce code.
+ @param conf Configuration which stores the timestamps
+ @param timestamps comma separated list of timestamps of files.
+ The order should be the same as the order in which the files are added.]]>
+      </doc>
+    </method>
+    <method name="setLocalArchives"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="str" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the conf to contain the location for localized archives.  Used
+ by internal DistributedCache code.
+ @param conf The conf to modify to contain the localized caches
+ @param str a comma separated list of local archives]]>
+      </doc>
+    </method>
+    <method name="setLocalFiles"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="str" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the conf to contain the location for localized files.  Used
+ by internal DistributedCache code.
+ @param conf The conf to modify to contain the localized caches
+ @param str a comma separated list of local files]]>
+      </doc>
+    </method>
+    <field name="CACHE_FILES_SIZES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_FILES_SIZES} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_FILES_SIZES}]]>
+      </doc>
+    </field>
+    <field name="CACHE_ARCHIVES_SIZES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_ARCHIVES_SIZES} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_ARCHIVES_SIZES}]]>
+      </doc>
+    </field>
+    <field name="CACHE_ARCHIVES_TIMESTAMPS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_ARCHIVES_TIMESTAMPS} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_ARCHIVES_TIMESTAMPS}]]>
+      </doc>
+    </field>
+    <field name="CACHE_FILES_TIMESTAMPS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_FILES_TIMESTAMPS} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_FILE_TIMESTAMPS}]]>
+      </doc>
+    </field>
+    <field name="CACHE_ARCHIVES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_ARCHIVES} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_ARCHIVES}]]>
+      </doc>
+    </field>
+    <field name="CACHE_FILES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_FILES} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_FILES}]]>
+      </doc>
+    </field>
+    <field name="CACHE_LOCALARCHIVES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_LOCALARCHIVES} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_LOCALARCHIVES}]]>
+      </doc>
+    </field>
+    <field name="CACHE_LOCALFILES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_LOCALFILES} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_LOCALFILES}]]>
+      </doc>
+    </field>
+    <field name="CACHE_SYMLINK" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Warning: {@link #CACHE_SYMLINK} is not a *public* constant.
+ The variable is kept for M/R 1.x applications; M/R 2.x applications should
+ use {@link MRJobConfig#CACHE_SYMLINK}]]>
+      </doc>
+    </field>
+    <doc>
+    <![CDATA[Distribute application-specific large, read-only files efficiently.
+
+ <p><code>DistributedCache</code> is a facility provided by the Map-Reduce
+ framework to cache files (text, archives, jars etc.) needed by applications.
+ </p>
+
+ <p>Applications specify the files to be cached, via urls (hdfs:// or http://),
+ in the {@link org.apache.hadoop.mapred.JobConf}. The
+ <code>DistributedCache</code> assumes that the files specified via urls are
+ already present on the {@link FileSystem} at the path specified by the url
+ and are accessible by every machine in the cluster.</p>
+
+ <p>The framework will copy the necessary files on to the worker node before
+ any tasks for the job are executed on that node. Its efficiency stems from
+ the fact that the files are only copied once per job and the ability to
+ cache archives which are un-archived on the workers.</p>
+
+ <p><code>DistributedCache</code> can be used to distribute simple, read-only
+ data/text files and/or more complex types such as archives, jars etc.
+ Archives (zip, tar and tgz/tar.gz files) are un-archived at the worker nodes.
+ Jars may be optionally added to the classpath of the tasks, a rudimentary
+ software distribution mechanism.  Files have execution permissions.
+ In older versions of Hadoop Map/Reduce, users could optionally ask for symlinks
+ to be created in the working directory of the child task.  In the current
+ version symlinks are always created.  If the URL does not have a fragment
+ the name of the file or directory will be used. If multiple files or
+ directories map to the same link name, the last one added will be used.  All
+ others will not even be downloaded.</p>
+
+ <p><code>DistributedCache</code> tracks modification timestamps of the cache
+ files. Clearly the cache files should not be modified by the application
+ or externally while the job is executing.</p>
+
+ <p>Here is an illustrative example on how to use the
+ <code>DistributedCache</code>:</p>
+ <p><blockquote><pre>
+     // Setting up the cache for the application
+
+     1. Copy the requisite files to the <code>FileSystem</code>:
+
+     $ bin/hadoop fs -copyFromLocal lookup.dat /myapp/lookup.dat
+     $ bin/hadoop fs -copyFromLocal map.zip /myapp/map.zip
+     $ bin/hadoop fs -copyFromLocal mylib.jar /myapp/mylib.jar
+     $ bin/hadoop fs -copyFromLocal mytar.tar /myapp/mytar.tar
+     $ bin/hadoop fs -copyFromLocal mytgz.tgz /myapp/mytgz.tgz
+     $ bin/hadoop fs -copyFromLocal mytargz.tar.gz /myapp/mytargz.tar.gz
+
+     2. Setup the application's <code>JobConf</code>:
+
+     JobConf job = new JobConf();
+     DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"),
+                                   job);
+     DistributedCache.addCacheArchive(new URI("/myapp/map.zip"), job);
+     DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
+     DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar"), job);
+     DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz"), job);
+     DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz"), job);
+
+     3. Use the cached files in the {@link org.apache.hadoop.mapred.Mapper}
+     or {@link org.apache.hadoop.mapred.Reducer}:
+
+     public static class MapClass extends MapReduceBase
+     implements Mapper&lt;K, V, K, V&gt; {
+
+       private Path[] localArchives;
+       private Path[] localFiles;
+
+       public void configure(JobConf job) {
+         // Get the cached archives/files
+         File f = new File("./map.zip/some/file/in/zip.txt");
+       }
+
+       public void map(K key, V value,
+                       OutputCollector&lt;K, V&gt; output, Reporter reporter)
+       throws IOException {
+         // Use data from the cached archives/files here
+         // ...
+         // ...
+         output.collect(k, v);
+       }
+     }
+
+ </pre></blockquote>
+
+ It is also common to use the DistributedCache via
+ {@link org.apache.hadoop.util.GenericOptionsParser}.
+
+ This class includes methods that should be used by users
+ (specifically those mentioned in the example above, as well
+ as {@link DistributedCache#addArchiveToClassPath(Path, Configuration)}),
+ as well as methods intended for use by the MapReduce framework
+ (e.g., {@link org.apache.hadoop.mapred.JobClient}).
+
+ @see org.apache.hadoop.mapred.JobConf
+ @see org.apache.hadoop.mapred.JobClient
+ @see org.apache.hadoop.mapreduce.Job]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.filecache.DistributedCache -->
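Since this class is kept largely for M/R 1.x compatibility (note the @see
reference to org.apache.hadoop.mapreduce.Job), a short sketch of the equivalent
M/R 2.x setup through the Job API; the paths reuse the hypothetical ones from
the javadoc example above:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CacheSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cache-demo");
        // The URI fragment (#lookup.dat) controls the symlink name
        // created in the task working directory.
        job.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"));
        job.addCacheArchive(new URI("/myapp/map.zip"));
      }
    }
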
+</package>
+<package name="org.apache.hadoop.mapred">
+  <!-- start class org.apache.hadoop.mapred.ClusterStatus -->
+  <class name="ClusterStatus" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.io.Writable"/>
+    <method name="getTaskTrackers" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of task trackers in the cluster.
+ 
+ @return the number of task trackers in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getActiveTrackerNames" return="java.util.Collection"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the names of active task trackers in the cluster.
+ 
+ @return the active task trackers in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getBlacklistedTrackerNames" return="java.util.Collection"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the names of blacklisted task trackers in the cluster.
+ 
+ @return the blacklisted task trackers in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getGraylistedTrackerNames" return="java.util.Collection"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the names of graylisted task trackers in the cluster.
+
+ The gray list of trackers is no longer available on M/R 2.x. The function
+ is kept to be compatible with M/R 1.x applications.
+
+ @return an empty collection of graylisted task trackers in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getGraylistedTrackers" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of graylisted task trackers in the cluster.
+
+ The gray list of trackers is no longer available on M/R 2.x. The function
+ is kept to be compatible with M/R 1.x applications.
+
+ @return 0, as the gray list is always empty on M/R 2.x.]]>
+      </doc>
+    </method>
+    <method name="getBlacklistedTrackers" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of blacklisted task trackers in the cluster.
+ 
+ @return the number of blacklisted task trackers in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getNumExcludedNodes" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of excluded hosts in the cluster.
+ @return the number of excluded hosts in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getTTExpiryInterval" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the tasktracker expiry interval for the cluster.
+ @return the expiry interval in msec]]>
+      </doc>
+    </method>
+    <method name="getMapTasks" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of currently running map tasks in the cluster.
+ 
+ @return the number of currently running map tasks in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getReduceTasks" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of currently running reduce tasks in the cluster.
+ 
+ @return the number of currently running reduce tasks in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getMaxMapTasks" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the maximum capacity for running map tasks in the cluster.
+ 
+ @return the maximum capacity for running map tasks in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getMaxReduceTasks" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the maximum capacity for running reduce tasks in the cluster.
+ 
+ @return the maximum capacity for running reduce tasks in the cluster.]]>
+      </doc>
+    </method>
+    <method name="getJobTrackerStatus" return="org.apache.hadoop.mapreduce.Cluster.JobTrackerStatus"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the JobTracker's status.
+ 
+ @return {@link JobTrackerStatus} of the JobTracker]]>
+      </doc>
+    </method>
+    <method name="getMaxMemory" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns UNINITIALIZED_MEMORY_VALUE (-1)]]>
+      </doc>
+    </method>
+    <method name="getUsedMemory" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns UNINITIALIZED_MEMORY_VALUE (-1)]]>
+      </doc>
+    </method>
+    <method name="getBlackListedTrackersInfo" return="java.util.Collection"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Gets the list of blacklisted trackers along with reasons for blacklisting.
+ 
+ @return the collection of {@link BlackListInfo} objects.]]>
+      </doc>
+    </method>
+    <method name="getJobTrackerState" return="org.apache.hadoop.mapred.JobTracker.State"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the current state of the <code>JobTracker</code>,
+ as {@link JobTracker.State}
+
+ {@link JobTracker.State} should no longer be used on M/R 2.x. The function
+ is kept to be compatible with M/R 1.x applications.
+
+ @return a placeholder state of the <code>JobTracker</code>; the value is not meaningful on M/R 2.x.]]>
+      </doc>
+    </method>
+    <method name="write"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="out" type="java.io.DataOutput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="readFields"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <field name="UNINITIALIZED_MEMORY_VALUE" type="long"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[Status information on the current state of the Map-Reduce cluster.
+ 
+ <p><code>ClusterStatus</code> provides clients with information such as:
+ <ol>
+   <li>
+   Size of the cluster. 
+   </li>
+   <li>
+   Name of the trackers. 
+   </li>
+   <li>
+   Task capacity of the cluster. 
+   </li>
+   <li>
+   The number of currently running map and reduce tasks.
+   </li>
+   <li>
+   State of the <code>JobTracker</code>.
+   </li>
+   <li>
+   Details regarding black listed trackers.
+   </li>
+ </ol>
+ 
+ <p>Clients can query for the latest <code>ClusterStatus</code>, via 
+ {@link JobClient#getClusterStatus()}.</p>
+ 
+ @see JobClient]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.ClusterStatus -->
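A small usage sketch of the query path named in the javadoc above
(JobClient#getClusterStatus()). The printed fields are those listed for this
class; the bare JobConf is a placeholder for real cluster configuration:

    import org.apache.hadoop.mapred.ClusterStatus;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class ClusterStatusDemo {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        ClusterStatus status = client.getClusterStatus();
        // Running vs. maximum task capacity, as exposed by the getters above.
        System.out.println("trackers=" + status.getTaskTrackers()
            + " maps=" + status.getMapTasks() + "/" + status.getMaxMapTasks()
            + " reduces=" + status.getReduceTasks() + "/"
            + status.getMaxReduceTasks());
      }
    }
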
+  <!-- start class org.apache.hadoop.mapred.Counters -->
+  <class name="Counters" extends="org.apache.hadoop.mapreduce.counters.AbstractCounters"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="Counters"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="Counters" type="org.apache.hadoop.mapreduce.Counters"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getGroup" return="org.apache.hadoop.mapred.Counters.Group"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="groupName" type="java.lang.String"/>
+    </method>
+    <method name="getGroupNames" return="java.util.Collection"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="makeCompactString" return="java.lang.String"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="findCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="group" type="java.lang.String"/>
+      <param name="name" type="java.lang.String"/>
+    </method>
+    <method name="findCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use {@link #findCounter(String, String)} instead">
+      <param name="group" type="java.lang.String"/>
+      <param name="id" type="int"/>
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Find a counter by using strings
+ @param group the name of the group
+ @param id the id of the counter within the group (0 to N-1)
+ @param name the internal name of the counter
+ @return the counter for that name
+ @deprecated use {@link #findCounter(String, String)} instead]]>
+      </doc>
+    </method>
+    <method name="incrCounter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="java.lang.Enum"/>
+      <param name="amount" type="long"/>
+      <doc>
+      <![CDATA[Increments the specified counter by the specified amount, creating it if
+ it didn't already exist.
+ @param key identifies a counter
+ @param amount amount by which counter is to be incremented]]>
+      </doc>
+    </method>
+    <method name="incrCounter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="group" type="java.lang.String"/>
+      <param name="counter" type="java.lang.String"/>
+      <param name="amount" type="long"/>
+      <doc>
+      <![CDATA[Increments the specified counter by the specified amount, creating it if
+ it didn't already exist.
+ @param group the name of the group
+ @param counter the internal name of the counter
+ @param amount amount by which counter is to be incremented]]>
+      </doc>
+    </method>
+    <method name="getCounter" return="long"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="java.lang.Enum"/>
+      <doc>
+      <![CDATA[Returns current value of the specified counter, or 0 if the counter
+ does not exist.
+ @param key the counter enum to lookup
+ @return the counter value or 0 if counter not found]]>
+      </doc>
+    </method>
+    <method name="incrAllCounters"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="other" type="org.apache.hadoop.mapred.Counters"/>
+      <doc>
+      <![CDATA[Increments multiple counters by their amounts in another Counters
+ instance.
+ @param other the other Counters instance]]>
+      </doc>
+    </method>
+    <method name="size" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use {@link #countCounters()} instead">
+      <doc>
+      <![CDATA[@return the total number of counters
+ @deprecated use {@link #countCounters()} instead]]>
+      </doc>
+    </method>
+    <method name="sum" return="org.apache.hadoop.mapred.Counters"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="a" type="org.apache.hadoop.mapred.Counters"/>
+      <param name="b" type="org.apache.hadoop.mapred.Counters"/>
+      <doc>
+      <![CDATA[Convenience method for computing the sum of two sets of counters.
+ @param a the first counters
+ @param b the second counters
+ @return a new summed counters object]]>
+      </doc>
+    </method>
+    <method name="log"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="log" type="org.slf4j.Logger"/>
+      <doc>
+      <![CDATA[Logs the current counter values.
+ @param log The log to use.]]>
+      </doc>
+    </method>
+    <method name="makeEscapedCompactString" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Represent the counter in a textual format that can be converted back to
+ its object form
+ @return the string in the following format
+ {(groupName)(group-displayName)[(counterName)(displayName)(value)][]*}*]]>
+      </doc>
+    </method>
+    <method name="fromEscapedCompactString" return="org.apache.hadoop.mapred.Counters"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="compactString" type="java.lang.String"/>
+      <exception name="ParseException" type="java.text.ParseException"/>
+      <doc>
+      <![CDATA[Convert a stringified (by {@link #makeEscapedCompactString()}) counter
+ representation into a counter object.
+ @param compactString to parse
+ @return a new counters object
+ @throws ParseException]]>
+      </doc>
+    </method>
+    <field name="MAX_COUNTER_LIMIT" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="MAX_GROUP_LIMIT" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[A set of named counters.
+
+ <p><code>Counters</code> represent global counters, defined either by the
+ Map-Reduce framework or applications. Each <code>Counter</code> can be of
+ any {@link Enum} type.</p>
+
+ <p><code>Counters</code> are bunched into {@link Group}s, each comprising
+ counters from a particular <code>Enum</code> class.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.Counters -->
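A brief sketch of the Counters round trip described above: incrementing by
group/name, reading the value back, and serializing with
makeEscapedCompactString / fromEscapedCompactString. The group and counter
names are hypothetical; the methods are those in the listing:

    import org.apache.hadoop.mapred.Counters;

    public class CountersDemo {
      public static void main(String[] args) throws Exception {
        Counters counters = new Counters();
        counters.incrCounter("MyApp", "RECORDS_SEEN", 5);  // created on first use
        long seen = counters.getGroup("MyApp").getCounter("RECORDS_SEEN");

        // Textual round trip via the escaped compact representation.
        String compact = counters.makeEscapedCompactString();
        Counters restored = Counters.fromEscapedCompactString(compact);
        System.out.println(seen + " == "
            + restored.getGroup("MyApp").getCounter("RECORDS_SEEN"));
      }
    }
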
+  <!-- start class org.apache.hadoop.mapred.Counters.Counter -->
+  <class name="Counters.Counter" extends="java.lang.Object"
+    abstract="false"
+    static="true" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapreduce.Counter"/>
+    <constructor name="Counter"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setDisplayName"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="displayName" type="java.lang.String"/>
+    </method>
+    <method name="getName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getDisplayName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getValue" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setValue"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="value" type="long"/>
+    </method>
+    <method name="increment"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="incr" type="long"/>
+    </method>
+    <method name="write"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="out" type="java.io.DataOutput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="readFields"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="makeEscapedCompactString" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns the compact stringified version of the counter in the format
+ [(actual-name)(display-name)(value)]
+ @return the stringified result]]>
+      </doc>
+    </method>
+    <method name="contentEquals" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="deprecated, no comment">
+      <param name="counter" type="org.apache.hadoop.mapred.Counters.Counter"/>
+      <doc>
+      <![CDATA[Checks for (content) equality of two (basic) counters
+ @param counter to compare
+ @return true if content equals
+ @deprecated]]>
+      </doc>
+    </method>
+    <method name="getCounter" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the value of the counter]]>
+      </doc>
+    </method>
+    <method name="getUnderlyingCounter" return="org.apache.hadoop.mapreduce.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="equals" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="genericRight" type="java.lang.Object"/>
+    </method>
+    <method name="hashCode" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[A counter record, comprising its name and value.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.Counters.Counter -->
+  <!-- start class org.apache.hadoop.mapred.Counters.Group -->
+  <class name="Counters.Group" extends="java.lang.Object"
+    abstract="false"
+    static="true" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapreduce.counters.CounterGroupBase"/>
+    <constructor name="Group"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getCounter" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="counterName" type="java.lang.String"/>
+      <doc>
+      <![CDATA[@param counterName the name of the counter
+ @return the value of the specified counter, or 0 if the counter does
+ not exist.]]>
+      </doc>
+    </method>
+    <method name="makeEscapedCompactString" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the compact stringified version of the group in the format
+ {(actual-name)(display-name)(value)[][][]} where [] are compact strings
+ for the counters within.]]>
+      </doc>
+    </method>
+    <method name="getCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use {@link #findCounter(String)} instead">
+      <param name="id" type="int"/>
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Get the counter for the given id and create it if it doesn't exist.
+ @param id the numeric id of the counter within the group
+ @param name the internal counter name
+ @return the counter
+ @deprecated use {@link #findCounter(String)} instead]]>
+      </doc>
+    </method>
+    <method name="getCounterForName" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Get the counter for the given name and create it if it doesn't exist.
+ @param name the internal counter name
+ @return the counter]]>
+      </doc>
+    </method>
+    <method name="write"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="out" type="java.io.DataOutput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="readFields"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="iterator" return="java.util.Iterator"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getDisplayName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setDisplayName"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="displayName" type="java.lang.String"/>
+    </method>
+    <method name="addCounter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="counter" type="org.apache.hadoop.mapred.Counters.Counter"/>
+    </method>
+    <method name="addCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="name" type="java.lang.String"/>
+      <param name="displayName" type="java.lang.String"/>
+      <param name="value" type="long"/>
+    </method>
+    <method name="findCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="counterName" type="java.lang.String"/>
+      <param name="displayName" type="java.lang.String"/>
+    </method>
+    <method name="findCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="counterName" type="java.lang.String"/>
+      <param name="create" type="boolean"/>
+    </method>
+    <method name="findCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="counterName" type="java.lang.String"/>
+    </method>
+    <method name="size" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="incrAllCounters"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="rightGroup" type="org.apache.hadoop.mapreduce.counters.CounterGroupBase"/>
+    </method>
+    <method name="getUnderlyingGroup" return="org.apache.hadoop.mapreduce.counters.CounterGroupBase"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="equals" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="genericRight" type="java.lang.Object"/>
+    </method>
+    <method name="hashCode" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[<code>Group</code> of counters, comprising counters from a particular
+  counter {@link Enum} class.
+
+  <p><code>Group</code> handles localization of the class name and the
+  counter names.</p>]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.Counters.Group -->
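+  <!-- Usage sketch (illustrative only, not part of the generated listing):
+       reading the counters in a Group, assuming an existing old-API
+       Counters instance named "counters" (e.g. from RunningJob#getCounters())
+       and a hypothetical group name "MyCounters".
+
+       Counters.Group group = counters.getGroup("MyCounters");
+       // getCounter(String) returns 0 if the named counter does not exist.
+       long records = group.getCounter("RECORDS");
+       // A Group is iterable over its Counters.Counter entries.
+       for (Counters.Counter c : group) {
+         System.out.println(c.getDisplayName() + " = " + c.getValue());
+       }
+  -->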
+  <!-- start class org.apache.hadoop.mapred.FileAlreadyExistsException -->
+  <class name="FileAlreadyExistsException" extends="java.io.IOException"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="FileAlreadyExistsException"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileAlreadyExistsException" type="java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <doc>
+    <![CDATA[Used when target file already exists for any operation and 
+ is not configured to be overwritten.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.FileAlreadyExistsException -->
+  <!-- start class org.apache.hadoop.mapred.FileInputFormat -->
+  <class name="FileInputFormat" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.InputFormat"/>
+    <constructor name="FileInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setMinSplitSize"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="minSplitSize" type="long"/>
+    </method>
+    <method name="isSplitable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="fs" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="filename" type="org.apache.hadoop.fs.Path"/>
+      <doc>
+      <![CDATA[Is the given filename splittable? Usually true, but if the file is
+ stream compressed, it will not be.
+
+ The default implementation in <code>FileInputFormat</code> always returns
+ true. Implementations that may deal with non-splittable files <i>must</i>
+ override this method.
+
+ <code>FileInputFormat</code> implementations can override this and return
+ <code>false</code> to ensure that individual input files are never split-up
+ so that {@link Mapper}s process entire files.
+ 
+ @param fs the file system that the file is on
+ @param filename the file name to check
+ @return whether this file is splittable]]>
+      </doc>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="setInputPathFilter"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="filter" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set a PathFilter to be applied to the input paths for the map-reduce job.
+
+ @param filter the PathFilter class use for filtering the input paths.]]>
+      </doc>
+    </method>
+    <method name="getInputPathFilter" return="org.apache.hadoop.fs.PathFilter"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get a PathFilter instance of the filter set for the input paths.
+
+ @return the PathFilter instance set for the job, NULL if none has been set.]]>
+      </doc>
+    </method>
+    <method name="addInputPathRecursively"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="result" type="java.util.List"/>
+      <param name="fs" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="path" type="org.apache.hadoop.fs.Path"/>
+      <param name="inputFilter" type="org.apache.hadoop.fs.PathFilter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Add files in the input path recursively into the results.
+ @param result
+          The List to store all files.
+ @param fs
+          The FileSystem.
+ @param path
+          The input path.
+ @param inputFilter
+          The input filter that can be used to filter files/dirs. 
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="listStatus" return="org.apache.hadoop.fs.FileStatus[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[List input directories.
+ Subclasses may override to, e.g., select only files matching a regular
+ expression. 
+ 
+ @param job the job to list input paths for
+ @return array of FileStatus objects
+ @throws IOException if there are zero items.]]>
+      </doc>
+    </method>
+    <method name="makeSplit" return="org.apache.hadoop.mapred.FileSplit"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+      <param name="start" type="long"/>
+      <param name="length" type="long"/>
+      <param name="hosts" type="java.lang.String[]"/>
+      <doc>
+      <![CDATA[A factory that makes the split for this class. It can be overridden
+ by sub-classes to make sub-types.]]>
+      </doc>
+    </method>
+    <method name="makeSplit" return="org.apache.hadoop.mapred.FileSplit"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+      <param name="start" type="long"/>
+      <param name="length" type="long"/>
+      <param name="hosts" type="java.lang.String[]"/>
+      <param name="inMemoryHosts" type="java.lang.String[]"/>
+      <doc>
+      <![CDATA[A factory that makes the split for this class. It can be overridden
+ by sub-classes to make sub-types.]]>
+      </doc>
+    </method>
+    <method name="getSplits" return="org.apache.hadoop.mapred.InputSplit[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="numSplits" type="int"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Splits files returned by {@link #listStatus(JobConf)} when
+ they're too big.]]>
+      </doc>
+    </method>
+    <method name="computeSplitSize" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="goalSize" type="long"/>
+      <param name="minSize" type="long"/>
+      <param name="blockSize" type="long"/>
+    </method>
+    <method name="getBlockIndex" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="blkLocations" type="org.apache.hadoop.fs.BlockLocation[]"/>
+      <param name="offset" type="long"/>
+    </method>
+    <method name="setInputPaths"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="commaSeparatedPaths" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Sets the given comma separated paths as the list of inputs 
+ for the map-reduce job.
+ 
+ @param conf Configuration of the job
+ @param commaSeparatedPaths Comma separated paths to be set as 
+        the list of inputs for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="addInputPaths"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="commaSeparatedPaths" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Add the given comma separated paths to the list of inputs for
+  the map-reduce job.
+ 
+ @param conf The configuration of the job 
+ @param commaSeparatedPaths Comma separated paths to be added to
+        the list of inputs for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="setInputPaths"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="inputPaths" type="org.apache.hadoop.fs.Path[]"/>
+      <doc>
+      <![CDATA[Set the array of {@link Path}s as the list of inputs
+ for the map-reduce job.
+ 
+ @param conf Configuration of the job. 
+ @param inputPaths the {@link Path}s of the input directories/files 
+ for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="addInputPath"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="path" type="org.apache.hadoop.fs.Path"/>
+      <doc>
+      <![CDATA[Add a {@link Path} to the list of inputs for the map-reduce job.
+ 
+ @param conf The configuration of the job 
+ @param path {@link Path} to be added to the list of inputs for 
+            the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="getInputPaths" return="org.apache.hadoop.fs.Path[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the list of input {@link Path}s for the map-reduce job.
+ 
+ @param conf The configuration of the job 
+ @return the list of input {@link Path}s for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="getSplitHosts" return="java.lang.String[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="blkLocations" type="org.apache.hadoop.fs.BlockLocation[]"/>
+      <param name="offset" type="long"/>
+      <param name="splitSize" type="long"/>
+      <param name="clusterMap" type="org.apache.hadoop.net.NetworkTopology"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This function identifies and returns the hosts that contribute 
+ most for a given split. For calculating the contribution, rack
+ locality is treated on par with host locality, so hosts from racks
+ that contribute the most are preferred over hosts on racks that 
+ contribute less.
+ @param blkLocations The list of block locations
+ @param offset the offset within the file
+ @param splitSize the size of the split
+ @return an array of hosts that contribute most to this split
+ @throws IOException]]>
+      </doc>
+    </method>
+    <field name="LOG" type="org.slf4j.Logger"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="NUM_INPUT_FILES" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="INPUT_DIR_RECURSIVE" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="INPUT_DIR_NONRECURSIVE_IGNORE_SUBDIRS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[A base class for file-based {@link InputFormat}.
+ 
+ <p><code>FileInputFormat</code> is the base class for all file-based 
+ <code>InputFormat</code>s. This provides a generic implementation of
+ {@link #getSplits(JobConf, int)}.
+
+ Implementations of <code>FileInputFormat</code> can also override the
+ {@link #isSplitable(FileSystem, Path)} method to prevent input files
+ from being split-up in certain situations. Implementations that may
+ deal with non-splittable files <i>must</i> override this method, since
+ the default implementation assumes splitting is always possible.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.FileInputFormat -->
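+  <!-- Usage sketch (illustrative only): the isSplitable(FileSystem, Path)
+       override described above. The subclass name is hypothetical;
+       TextInputFormat is the stock line-oriented FileInputFormat. Returning
+       false forces each Mapper to process a whole file.
+
+       import org.apache.hadoop.fs.FileSystem;
+       import org.apache.hadoop.fs.Path;
+       import org.apache.hadoop.mapred.TextInputFormat;
+
+       public class WholeFileTextInputFormat extends TextInputFormat {
+         @Override
+         protected boolean isSplitable(FileSystem fs, Path file) {
+           return false; // e.g. for stream-compressed input files
+         }
+       }
+  -->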
+  <!-- start class org.apache.hadoop.mapred.FileOutputCommitter -->
+  <class name="FileOutputCommitter" extends="org.apache.hadoop.mapred.OutputCommitter"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="FileOutputCommitter"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getWorkPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <param name="outputPath" type="org.apache.hadoop.fs.Path"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="setupJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="commitJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="cleanupJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="abortJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.JobContext"/>
+      <param name="runState" type="int"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="setupTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="commitTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="abortTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="needsTaskCommit" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="isRecoverySupported" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="isCommitJobRepeatable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="isRecoverySupported" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="recoverTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <field name="LOG" type="org.slf4j.Logger"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="TEMP_DIR_NAME" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Temporary directory name]]>
+      </doc>
+    </field>
+    <field name="SUCCEEDED_FILE_NAME" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[An {@link OutputCommitter} that commits files specified 
+ in the job output directory, i.e. ${mapreduce.output.fileoutputformat.outputdir}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.FileOutputCommitter -->
+  <!-- start class org.apache.hadoop.mapred.FileOutputFormat -->
+  <class name="FileOutputFormat" extends="java.lang.Object"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.OutputFormat"/>
+    <constructor name="FileOutputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setCompressOutput"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="compress" type="boolean"/>
+      <doc>
+      <![CDATA[Set whether the output of the job is compressed.
+ @param conf the {@link JobConf} to modify
+ @param compress should the output of the job be compressed?]]>
+      </doc>
+    </method>
+    <method name="getCompressOutput" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Is the job output compressed?
+ @param conf the {@link JobConf} to look in
+ @return <code>true</code> if the job output should be compressed,
+         <code>false</code> otherwise]]>
+      </doc>
+    </method>
+    <method name="setOutputCompressorClass"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="codecClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link CompressionCodec} to be used to compress job outputs.
+ @param conf the {@link JobConf} to modify
+ @param codecClass the {@link CompressionCodec} to be used to
+                   compress the job outputs]]>
+      </doc>
+    </method>
+    <method name="getOutputCompressorClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="defaultValue" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Get the {@link CompressionCodec} for compressing the job outputs.
+ @param conf the {@link JobConf} to look in
+ @param defaultValue the {@link CompressionCodec} to return if not set
+ @return the {@link CompressionCodec} to be used to compress the 
+         job outputs
+ @throws IllegalArgumentException if the class was specified, but not found]]>
+      </doc>
+    </method>
+    <method name="getRecordWriter" return="org.apache.hadoop.mapred.RecordWriter"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <param name="progress" type="org.apache.hadoop.util.Progressable"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="checkOutputSpecs"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="FileAlreadyExistsException" type="org.apache.hadoop.mapred.FileAlreadyExistsException"/>
+      <exception name="InvalidJobConfException" type="org.apache.hadoop.mapred.InvalidJobConfException"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="setOutputPath"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="outputDir" type="org.apache.hadoop.fs.Path"/>
+      <doc>
+      <![CDATA[Set the {@link Path} of the output directory for the map-reduce job.
+
+ @param conf The configuration of the job.
+ @param outputDir the {@link Path} of the output directory for 
+ the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="getOutputPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the {@link Path} to the output directory for the map-reduce job.
+ 
+ @return the {@link Path} to the output directory for the map-reduce job.
+ @see FileOutputFormat#getWorkOutputPath(JobConf)]]>
+      </doc>
+    </method>
+    <method name="getWorkOutputPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the {@link Path} to the task's temporary output directory 
+  for the map-reduce job
+  
+ <b id="SideEffectFiles">Tasks' Side-Effect Files</b>
+ 
+ <p><i>Note:</i> The following is valid only if the {@link OutputCommitter}
+  is {@link FileOutputCommitter}. If <code>OutputCommitter</code> is not 
+  a <code>FileOutputCommitter</code>, the task's temporary output
+  directory is same as {@link #getOutputPath(JobConf)} i.e.
+  <tt>${mapreduce.output.fileoutputformat.outputdir}</tt></p>
+  
+ <p>Some applications need to create/write-to side-files, which differ from
+ the actual job-outputs.
+ 
+ <p>In such cases there could be issues with 2 instances of the same TIP 
+ (running simultaneously e.g. speculative tasks) trying to open/write-to the
+ same file (path) on HDFS. Hence the application-writer will have to pick 
+ unique names per task-attempt (e.g. using the attemptid, say 
+ <tt>attempt_200709221812_0001_m_000000_0</tt>), not just per TIP.</p> 
+ 
+ <p>To get around this the Map-Reduce framework helps the application-writer 
+ out by maintaining a special 
+ <tt>${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}</tt> 
+ sub-directory for each task-attempt on HDFS where the output of the 
+ task-attempt goes. On successful completion of the task-attempt the files 
+ in the <tt>${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}</tt> (only) 
+ are <i>promoted</i> to <tt>${mapreduce.output.fileoutputformat.outputdir}</tt>. Of course, the 
+ framework discards the sub-directory of unsuccessful task-attempts. This 
+ is completely transparent to the application.</p>
+ 
+ <p>The application-writer can take advantage of this by creating any 
+ side-files required in <tt>${mapreduce.task.output.dir}</tt> during execution 
+ of the reduce-task, i.e. via {@link #getWorkOutputPath(JobConf)}, and the
+ framework will move them out similarly, so the application-writer doesn't
+ have to pick unique paths per task-attempt.</p>
+ 
+ <p><i>Note</i>: the value of <tt>${mapreduce.task.output.dir}</tt> during 
+ execution of a particular task-attempt is actually 
+ <tt>${mapreduce.output.fileoutputformat.outputdir}/_temporary/_${taskid}</tt>, and this value is
+ set by the map-reduce framework. So, just create any side-files in the 
+ path  returned by {@link #getWorkOutputPath(JobConf)} from map/reduce 
+ task to take advantage of this feature.</p>
+ 
+ <p>The entire discussion holds true for maps of jobs with 
+ reducer=NONE (i.e. 0 reduces) since output of the map, in that case, 
+ goes directly to HDFS.</p> 
+ 
+ @return the {@link Path} to the task's temporary output directory 
+ for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="getTaskOutputPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Helper function to create the task's temporary output directory and 
+ return the path to the task's output file.
+ 
+ @param conf job-configuration
+ @param name temporary task-output filename
+ @return path to the task's temporary output file
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getUniqueName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Helper function to generate a name that is unique for the task.
+
+ <p>The generated name can be used to create custom files from within the
+ different tasks for the job; the names for different tasks will not collide
+ with each other.</p>
+
+ <p>The given name is postfixed with the task type, 'm' for maps, 'r' for
+ reduces and the task partition number. For example, given the name 'test'
+ running on the first map of the job, the generated name will be
+ 'test-m-00000'.</p>
+
+ @param conf the configuration for the job.
+ @param name the name to make unique.
+ @return a unique name across all tasks of the job.]]>
+      </doc>
+    </method>
+    <method name="getPathForCustomFile" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Helper function to generate a {@link Path} for a file that is unique for
+ the task within the job output directory.
+
+ <p>The path can be used to create custom files from within the map and
+ reduce tasks. The path name will be unique for each task. The path parent
+ will be the job output directory.</p>
+
+ <p>This method uses the {@link #getUniqueName} method to make the file name
+ unique for the task.</p>
+
+ @param conf the configuration for the job.
+ @param name the name for the file.
+ @return a unique path across all tasks of the job.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[A base class for {@link OutputFormat}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.FileOutputFormat -->
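+  <!-- Usage sketch (illustrative only): creating a task-unique side-file
+       as described above, assuming the task's JobConf is available as
+       "conf". getPathForCustomFile() yields a per-task-unique path whose
+       final parent is the job output directory, so concurrent
+       task-attempts do not collide.
+
+       org.apache.hadoop.fs.Path side =
+           FileOutputFormat.getPathForCustomFile(conf, "side");
+       org.apache.hadoop.fs.FSDataOutputStream out =
+           side.getFileSystem(conf).create(side);
+       out.writeUTF("side data");
+       out.close();
+  -->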
+  <!-- start class org.apache.hadoop.mapred.FileSplit -->
+  <class name="FileSplit" extends="org.apache.hadoop.mapreduce.InputSplit"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.InputSplitWithLocationInfo"/>
+    <constructor name="FileSplit"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="FileSplit" type="org.apache.hadoop.fs.Path, long, long, org.apache.hadoop.mapred.JobConf"
+      static="false" final="false" visibility="public"
+      deprecated="deprecated, no comment">
+      <doc>
+      <![CDATA[Constructs a split.
+ @deprecated
+ @param file the file name
+ @param start the position of the first byte in the file to process
+ @param length the number of bytes in the file to process]]>
+      </doc>
+    </constructor>
+    <constructor name="FileSplit" type="org.apache.hadoop.fs.Path, long, long, java.lang.String[]"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a split with host information
+
+ @param file the file name
+ @param start the position of the first byte in the file to process
+ @param length the number of bytes in the file to process
+ @param hosts the list of hosts containing the block, possibly null]]>
+      </doc>
+    </constructor>
+    <constructor name="FileSplit" type="org.apache.hadoop.fs.Path, long, long, java.lang.String[], java.lang.String[]"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a split with host information
+
+ @param file the file name
+ @param start the position of the first byte in the file to process
+ @param length the number of bytes in the file to process
+ @param hosts the list of hosts containing the block, possibly null
+ @param inMemoryHosts the list of hosts containing the block in memory]]>
+      </doc>
+    </constructor>
+    <constructor name="FileSplit" type="org.apache.hadoop.mapreduce.lib.input.FileSplit"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The file containing this split's data.]]>
+      </doc>
+    </method>
+    <method name="getStart" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The position of the first byte in the file to process.]]>
+      </doc>
+    </method>
+    <method name="getLength" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The number of bytes in the file to process.]]>
+      </doc>
+    </method>
+    <method name="toString" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="write"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="out" type="java.io.DataOutput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="readFields"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getLocations" return="java.lang.String[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getLocationInfo" return="org.apache.hadoop.mapred.SplitLocationInfo[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[A section of an input file.  Returned by {@link
+ InputFormat#getSplits(JobConf, int)} and passed to
+ {@link InputFormat#getRecordReader(InputSplit,JobConf,Reporter)}.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.FileSplit -->
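+  <!-- Usage sketch (illustrative only): constructing a FileSplit with host
+       information; the path, offsets and host names below are made up.
+
+       FileSplit split = new FileSplit(
+           new org.apache.hadoop.fs.Path("/data/input.bin"),
+           0L,                                // position of the first byte
+           64L * 1024 * 1024,                 // number of bytes to process
+           new String[] { "host1", "host2" }  // hosts holding the block
+       );
+  -->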
+  <!-- start class org.apache.hadoop.mapred.FixedLengthInputFormat -->
+  <class name="FixedLengthInputFormat" extends="org.apache.hadoop.mapred.FileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <constructor name="FixedLengthInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setRecordLength"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="recordLength" type="int"/>
+      <doc>
+      <![CDATA[Set the length of each record
+ @param conf configuration
+ @param recordLength the length of a record]]>
+      </doc>
+    </method>
+    <method name="getRecordLength" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get record length value
+ @param conf configuration
+ @return the record length, zero means none was set]]>
+      </doc>
+    </method>
+    <method name="configure"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="genericSplit" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="isSplitable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="fs" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+    </method>
+    <field name="FIXED_RECORD_LENGTH" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[FixedLengthInputFormat is an input format used to read input files
+ which contain fixed length records.  The content of a record need not be
+ text.  It can be arbitrary binary data.  Users must configure the record
+ length property by calling:
+ FixedLengthInputFormat.setRecordLength(conf, recordLength);<br><br> or
+ conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, recordLength);
+ <br><br>
+ @see FixedLengthRecordReader]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.FixedLengthInputFormat -->
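+  <!-- Usage sketch (illustrative only): configuring a job to read 128-byte
+       binary records, using the two equivalent configuration styles named
+       in the class description above.
+
+       org.apache.hadoop.mapred.JobConf conf =
+           new org.apache.hadoop.mapred.JobConf();
+       conf.setInputFormat(FixedLengthInputFormat.class);
+       FixedLengthInputFormat.setRecordLength(conf, 128);
+       // equivalently:
+       // conf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, 128);
+  -->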
+  <!-- start class org.apache.hadoop.mapred.ID -->
+  <class name="ID" extends="org.apache.hadoop.mapreduce.ID"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="ID" type="int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[constructs an ID object from the given int]]>
+      </doc>
+    </constructor>
+    <constructor name="ID"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </constructor>
+    <doc>
+    <![CDATA[A general identifier, which internally stores the id
+ as an integer. This is the super class of {@link JobID}, 
+ {@link TaskID} and {@link TaskAttemptID}.
+ 
+ @see JobID
+ @see TaskID
+ @see TaskAttemptID]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.ID -->
+  <!-- start interface org.apache.hadoop.mapred.InputFormat -->
+  <interface name="InputFormat"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getSplits" return="org.apache.hadoop.mapred.InputSplit[]"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="numSplits" type="int"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Logically split the set of input files for the job.  
+ 
+ <p>Each {@link InputSplit} is then assigned to an individual {@link Mapper}
+ for processing.</p>
+
+ <p><i>Note</i>: The split is a <i>logical</i> split of the inputs and the
+ input files are not physically split into chunks. For example, a split could
+ be an <i>&lt;input-file-path, start, offset&gt;</i> tuple.
+ 
+ @param job job configuration.
+ @param numSplits the desired number of splits, a hint.
+ @return an array of {@link InputSplit}s for the job.]]>
+      </doc>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the {@link RecordReader} for the given {@link InputSplit}.
+
+ <p>It is the responsibility of the <code>RecordReader</code> to respect
+ record boundaries while processing the logical split to present a 
+ record-oriented view to the individual task.</p>
+ 
+ @param split the {@link InputSplit}
+ @param job the job that this split belongs to
+ @return a {@link RecordReader}]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>InputFormat</code> describes the input-specification for a 
+ Map-Reduce job. 
+ 
+ <p>The Map-Reduce framework relies on the <code>InputFormat</code> of the
+ job to:</p>
+ <ol>
+   <li>
+   Validate the input-specification of the job.
+   </li>
+   <li>
+   Split-up the input file(s) into logical {@link InputSplit}s, each of 
+   which is then assigned to an individual {@link Mapper}.
+   </li>
+   <li>
+   Provide the {@link RecordReader} implementation to be used to glean
+   input records from the logical <code>InputSplit</code> for processing by 
+   the {@link Mapper}.
+   </li>
+ </ol>
+ 
+ <p>The default behavior of file-based {@link InputFormat}s, typically 
+ sub-classes of {@link FileInputFormat}, is to split the 
+ input into <i>logical</i> {@link InputSplit}s based on the total size, in 
+ bytes, of the input files. However, the {@link FileSystem} blocksize of  
+ the input files is treated as an upper bound for input splits. A lower bound 
+ on the split size can be set via 
+ <a href="{@docRoot}/../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml#mapreduce.input.fileinputformat.split.minsize">
+ mapreduce.input.fileinputformat.split.minsize</a>.</p>
+ 
+ <p>Clearly, logical splits based on input-size are insufficient for many
+ applications since record boundaries are to be respected. In such cases, the
+ application has to also implement a {@link RecordReader}, which has the
+ responsibility to respect record-boundaries and present a record-oriented
+ view of the logical <code>InputSplit</code> to the individual task.
+
+ @see InputSplit
+ @see RecordReader
+ @see JobClient
+ @see FileInputFormat]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.InputFormat -->
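+  <!-- Usage sketch (illustrative only): the getSplits()/getRecordReader()
+       contract driven by hand, assuming a JobConf named "conf" with input
+       paths already set. The framework normally runs this loop; raw types
+       are used for brevity.
+
+       InputFormat fmt = conf.getInputFormat();
+       InputSplit[] splits = fmt.getSplits(conf, 4); // 4 is only a hint
+       RecordReader rr = fmt.getRecordReader(splits[0], conf, Reporter.NULL);
+       Object key = rr.createKey();
+       Object value = rr.createValue();
+       while (rr.next(key, value)) {
+         // process one record of the split
+       }
+       rr.close();
+  -->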
+  <!-- start interface org.apache.hadoop.mapred.InputSplit -->
+  <interface name="InputSplit"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.io.Writable"/>
+    <method name="getLength" return="long"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the total number of bytes in the data of the <code>InputSplit</code>.
+ 
+ @return the number of bytes in the input split.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getLocations" return="java.lang.String[]"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the list of hostnames where the input split is located.
+ 
+ @return list of hostnames where data of the <code>InputSplit</code> is
+         located as an array of <code>String</code>s.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>InputSplit</code> represents the data to be processed by an 
+ individual {@link Mapper}. 
+
+ <p>Typically, it presents a byte-oriented view on the input and is the 
+ responsibility of {@link RecordReader} of the job to process this and present
+ a record-oriented view.
+ 
+ @see InputFormat
+ @see RecordReader]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.InputSplit -->
+  <!-- start interface org.apache.hadoop.mapred.InputSplitWithLocationInfo -->
+  <interface name="InputSplitWithLocationInfo"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.InputSplit"/>
+    <method name="getLocationInfo" return="org.apache.hadoop.mapred.SplitLocationInfo[]"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Gets info about which nodes the input split is stored on and how it is
+ stored at each location.
+ 
+ @return list of <code>SplitLocationInfo</code>s describing how the split
+    data is stored at each location. A null value indicates that all the
+    locations have the data stored on disk.
+ @throws IOException]]>
+      </doc>
+    </method>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.InputSplitWithLocationInfo -->
+  <!-- start class org.apache.hadoop.mapred.InvalidFileTypeException -->
+  <class name="InvalidFileTypeException" extends="java.io.IOException"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="InvalidFileTypeException"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="InvalidFileTypeException" type="java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <doc>
+    <![CDATA[Used when the file type differs from the desired file type, e.g.
+ getting a file when a directory is expected, or the file is of the wrong type.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.InvalidFileTypeException -->
+  <!-- start class org.apache.hadoop.mapred.InvalidInputException -->
+  <class name="InvalidInputException" extends="java.io.IOException"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="InvalidInputException" type="java.util.List"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create the exception with the given list.
+ @param probs the list of problems to report. This list is not copied.]]>
+      </doc>
+    </constructor>
+    <method name="getProblems" return="java.util.List"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the complete list of the problems reported.
+ @return the list of problems, which must not be modified]]>
+      </doc>
+    </method>
+    <method name="getMessage" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get a summary message of the problems found.
+ @return the concatenated messages from all of the problems.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[This class wraps a list of problems with the input, so that the user
+ can get a list of problems together instead of finding and fixing them one 
+ by one.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.InvalidInputException -->
+  <!-- start class org.apache.hadoop.mapred.InvalidJobConfException -->
+  <class name="InvalidJobConfException" extends="java.io.IOException"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="InvalidJobConfException"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="InvalidJobConfException" type="java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="InvalidJobConfException" type="java.lang.String, java.lang.Throwable"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="InvalidJobConfException" type="java.lang.Throwable"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <doc>
+    <![CDATA[This exception is thrown when the jobconf is missing some mandatory
+ attributes, or the value of some attributes is invalid.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.InvalidJobConfException -->
+  <!-- start class org.apache.hadoop.mapred.JobClient -->
+  <class name="JobClient" extends="org.apache.hadoop.mapreduce.tools.CLI"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="java.lang.AutoCloseable"/>
+    <constructor name="JobClient"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job client.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobClient" type="org.apache.hadoop.mapred.JobConf"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Build a job client with the given {@link JobConf}, and connect to the 
+ default cluster
+ 
+ @param conf the job configuration.
+ @throws IOException]]>
+      </doc>
+    </constructor>
+    <constructor name="JobClient" type="org.apache.hadoop.conf.Configuration"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Build a job client with the given {@link Configuration}, 
+ and connect to the default cluster
+ 
+ @param conf the configuration.
+ @throws IOException]]>
+      </doc>
+    </constructor>
+    <constructor name="JobClient" type="java.net.InetSocketAddress, org.apache.hadoop.conf.Configuration"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Build a job client, connect to the indicated job tracker.
+ 
+ @param jobTrackAddr the job tracker to connect to.
+ @param conf configuration.]]>
+      </doc>
+    </constructor>
+    <method name="init"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Connect to the default cluster
+ @param conf the job configuration.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Close the <code>JobClient</code>.]]>
+      </doc>
+    </method>
+    <method name="getFs" return="org.apache.hadoop.fs.FileSystem"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get a filesystem handle.  We need this to prepare jobs
+ for submission to the MapReduce system.
+ 
+ @return the filesystem handle.]]>
+      </doc>
+    </method>
+    <method name="getClusterHandle" return="org.apache.hadoop.mapreduce.Cluster"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get a handle to the Cluster.]]>
+      </doc>
+    </method>
+    <method name="submitJob" return="org.apache.hadoop.mapred.RunningJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobFile" type="java.lang.String"/>
+      <exception name="FileNotFoundException" type="java.io.FileNotFoundException"/>
+      <exception name="InvalidJobConfException" type="org.apache.hadoop.mapred.InvalidJobConfException"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Submit a job to the MR system.
+ 
+ This returns a handle to the {@link RunningJob} which can be used to track
+ the running-job.
+ 
+ @param jobFile the job configuration.
+ @return a handle to the {@link RunningJob} which can be used to track the
+         running-job.
+ @throws FileNotFoundException
+ @throws InvalidJobConfException
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="submitJob" return="org.apache.hadoop.mapred.RunningJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="FileNotFoundException" type="java.io.FileNotFoundException"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Submit a job to the MR system.
+ This returns a handle to the {@link RunningJob} which can be used to track
+ the running-job.
+ 
+ @param conf the job configuration.
+ @return a handle to the {@link RunningJob} which can be used to track the
+         running-job.
+ @throws FileNotFoundException
+ @throws IOException]]>
+      </doc>
+    </method>
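+    <!-- Editorial sketch (not JDiff output): the submit-then-poll pattern
+         described above, under the assumption that "conf" is a fully
+         configured JobConf (mapper, reducer and I/O paths already set).
+
+         JobClient client = new JobClient(conf);
+         RunningJob handle = client.submitJob(conf);
+         // poll until the job finishes, then check the outcome
+         while (!handle.isComplete()) {
+           Thread.sleep(5000);
+         }
+         System.out.println(handle.isSuccessful() ? "succeeded" : "failed");
+    -->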
+    <method name="getJobInner" return="org.apache.hadoop.mapred.RunningJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="jobid" type="org.apache.hadoop.mapred.JobID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getJob" return="org.apache.hadoop.mapred.RunningJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobid" type="org.apache.hadoop.mapred.JobID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get a {@link RunningJob} object to track an ongoing job.  Returns
+ null if the id does not correspond to any known job.
+
+ @param jobid the jobid of the job.
+ @return the {@link RunningJob} handle to track the job, null if the
+         <code>jobid</code> doesn't correspond to any known job.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getJob" return="org.apache.hadoop.mapred.RunningJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Applications should rather use {@link #getJob(JobID)}.">
+      <param name="jobid" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[@deprecated Applications should rather use {@link #getJob(JobID)}.]]>
+      </doc>
+    </method>
+    <method name="getMapTaskReports" return="org.apache.hadoop.mapred.TaskReport[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobId" type="org.apache.hadoop.mapred.JobID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get information about the current state of the map tasks of a job.
+ 
+ @param jobId the job to query.
+ @return the list of all of the map tips.
+ @throws IOException]]>
+      </doc>
+    </method>
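+    <!-- Editorial sketch (not JDiff output): inspecting map task progress,
+         assuming "client" and a RunningJob "handle" from a prior submitJob.
+
+         for (TaskReport report : client.getMapTaskReports(handle.getID())) {
+           // getProgress() reports a fraction in [0, 1]
+           System.out.println(report.getTaskID() + " " + report.getProgress());
+         }
+    -->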
+    <method name="getMapTaskReports" return="org.apache.hadoop.mapred.TaskReport[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Applications should rather use {@link #getMapTaskReports(JobID)}">
+      <param name="jobId" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[@deprecated Applications should rather use {@link #getMapTaskReports(JobID)}]]>
+      </doc>
+    </method>
+    <method name="getReduceTaskReports" return="org.apache.hadoop.mapred.TaskReport[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobId" type="org.apache.hadoop.mapred.JobID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get information about the current state of the reduce tasks of a job.
+ 
+ @param jobId the job to query.
+ @return the list of all of the reduce tips.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getCleanupTaskReports" return="org.apache.hadoop.mapred.TaskReport[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobId" type="org.apache.hadoop.mapred.JobID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get information about the current state of the cleanup tasks of a job.
+ 
+ @param jobId the job to query.
+ @return the list of all of the cleanup tips.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getSetupTaskReports" return="org.apache.hadoop.mapred.TaskReport[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobId" type="org.apache.hadoop.mapred.JobID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get information about the current state of the setup tasks of a job.
+ 
+ @param jobId the job to query.
+ @return the list of all of the setup tips.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getReduceTaskReports" return="org.apache.hadoop.mapred.TaskReport[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Applications should rather use {@link #getReduceTaskReports(JobID)}">
+      <param name="jobId" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[@deprecated Applications should rather use {@link #getReduceTaskReports(JobID)}]]>
+      </doc>
+    </method>
+    <method name="displayTasks"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobId" type="org.apache.hadoop.mapred.JobID"/>
+      <param name="type" type="java.lang.String"/>
+      <param name="state" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Display information about a job's tasks of a particular type and
+ in a particular state.
+ 
+ @param jobId the ID of the job
+ @param type the type of the task (map/reduce/setup/cleanup)
+ @param state the state of the task 
+ (pending/running/completed/failed/killed)
+ @throws IOException when there is an error communicating with the master
+ @throws IllegalArgumentException if an invalid type/state is passed]]>
+      </doc>
+    </method>
+    <method name="getClusterStatus" return="org.apache.hadoop.mapred.ClusterStatus"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get status information about the Map-Reduce cluster.
+  
+ @return the status information about the Map-Reduce cluster as an object
+         of {@link ClusterStatus}.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getClusterStatus" return="org.apache.hadoop.mapred.ClusterStatus"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="detailed" type="boolean"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get status information about the Map-Reduce cluster.
+  
+ @param  detailed if true then get a detailed status including the
+         tracker names
+ @return the status information about the Map-Reduce cluster as an object
+         of {@link ClusterStatus}.
+ @throws IOException]]>
+      </doc>
+    </method>
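+    <!-- Editorial sketch (not JDiff output): querying cluster status; the
+         printed fields are a small, arbitrary selection.
+
+         ClusterStatus status = client.getClusterStatus(true);
+         // detailed=true also populates the active tracker names
+         System.out.println("trackers:  " + status.getTaskTrackers());
+         System.out.println("map slots: " + status.getMaxMapTasks());
+    -->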
+    <method name="jobsToComplete" return="org.apache.hadoop.mapred.JobStatus[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the jobs that are not completed and not failed.
+ 
+ @return array of {@link JobStatus} for the running/to-be-run jobs.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getAllJobs" return="org.apache.hadoop.mapred.JobStatus[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the jobs that are submitted.
+ 
+ @return array of {@link JobStatus} for the submitted jobs.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="runJob" return="org.apache.hadoop.mapred.RunningJob"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Utility that submits a job, then polls for progress until the job is
+ complete.
+ 
+ @param job the job configuration.
+ @throws IOException if the job fails]]>
+      </doc>
+    </method>
+    <method name="monitorAndPrintJob" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="job" type="org.apache.hadoop.mapred.RunningJob"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <exception name="InterruptedException" type="java.lang.InterruptedException"/>
+      <doc>
+      <![CDATA[Monitor a job and print status in real-time as progress is made and tasks 
+ fail.
+ @param conf the job's configuration
+ @param job the job to track
+ @return true if the job succeeded
+ @throws IOException if communication to the JobTracker fails]]>
+      </doc>
+    </method>
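+    <!-- Editorial sketch (not JDiff output): pairing submitJob with
+         monitorAndPrintJob, assuming a fully configured JobConf "conf".
+
+         JobClient client = new JobClient(conf);
+         RunningJob handle = client.submitJob(conf);
+         // blocks, printing progress and task failures, until completion
+         boolean succeeded = client.monitorAndPrintJob(conf, handle);
+    -->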
+    <method name="setTaskOutputFilter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="newValue" type="org.apache.hadoop.mapred.JobClient.TaskStatusFilter"/>
+      <doc>
+      <![CDATA[Sets the output filter for tasks. Only those tasks are printed whose
+ output matches the filter.
+ @param newValue task filter.]]>
+      </doc>
+    </method>
+    <method name="getTaskOutputFilter" return="org.apache.hadoop.mapred.JobClient.TaskStatusFilter"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the task output filter out of the JobConf.
+ 
+ @param job the JobConf to examine.
+ @return the filter level.]]>
+      </doc>
+    </method>
+    <method name="setTaskOutputFilter"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="newValue" type="org.apache.hadoop.mapred.JobClient.TaskStatusFilter"/>
+      <doc>
+      <![CDATA[Modify the JobConf to set the task output filter.
+ 
+ @param job the JobConf to modify.
+ @param newValue the value to set.]]>
+      </doc>
+    </method>
+    <method name="getTaskOutputFilter" return="org.apache.hadoop.mapred.JobClient.TaskStatusFilter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns the task output filter.
+ @return task filter.]]>
+      </doc>
+    </method>
+    <method name="getCounter" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="cntrs" type="org.apache.hadoop.mapreduce.Counters"/>
+      <param name="counterGroupName" type="java.lang.String"/>
+      <param name="counterName" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getDefaultMaps" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get status information about the max available Maps in the cluster.
+  
+ @return the max available Maps in the cluster
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getDefaultReduces" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get status information about the max available Reduces in the cluster.
+  
+ @return the max available Reduces in the cluster
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getSystemDir" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Grab the jobtracker system directory path where job-specific files are to be placed.
+ 
+ @return the system directory where job-specific files are to be placed.]]>
+      </doc>
+    </method>
+    <method name="isJobDirValid" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobDirPath" type="org.apache.hadoop.fs.Path"/>
+      <param name="fs" type="org.apache.hadoop.fs.FileSystem"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Checks if the job directory is clean and has all the required components
+ for (re)starting the job.]]>
+      </doc>
+    </method>
+    <method name="getStagingAreaDir" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Fetch the staging area directory for the application.
+ 
+ @return path to staging area directory
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getRootQueues" return="org.apache.hadoop.mapred.JobQueueInfo[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns an array of queue information objects about the root level queues
+ configured.
+
+ @return the array of root level JobQueueInfo objects
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getChildQueues" return="org.apache.hadoop.mapred.JobQueueInfo[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="queueName" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns an array of queue information objects about immediate children
+ of queue queueName.
+ 
+ @param queueName the name of the queue whose immediate children are returned
+ @return the array of immediate children JobQueueInfo objects
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getQueues" return="org.apache.hadoop.mapred.JobQueueInfo[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Return an array of queue information objects about all the Job Queues
+ configured.
+ 
+ @return Array of JobQueueInfo objects
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getJobsFromQueue" return="org.apache.hadoop.mapred.JobStatus[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="queueName" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Gets all the jobs which were added to a particular Job Queue.
+ 
+ @param queueName name of the Job Queue
+ @return Array of jobs present in the job queue
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getQueueInfo" return="org.apache.hadoop.mapred.JobQueueInfo"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="queueName" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Gets the queue information associated with a particular Job Queue.
+ 
+ @param queueName name of the job queue.
+ @return Queue information associated with the particular queue.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getQueueAclsForCurrentUser" return="org.apache.hadoop.mapred.QueueAclsInfo[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Gets the Queue ACLs for the current user.
+ @return array of QueueAclsInfo object for current user.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getDelegationToken" return="org.apache.hadoop.security.token.Token"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="renewer" type="org.apache.hadoop.io.Text"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <exception name="InterruptedException" type="java.lang.InterruptedException"/>
+      <doc>
+      <![CDATA[Get a delegation token for the user from the JobTracker.
+ @param renewer the user who can renew the token
+ @return the new token
+ @throws IOException]]>
+      </doc>
+    </method>
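+    <!-- Editorial sketch (not JDiff output): fetching a delegation token;
+         "renewerUser" is a placeholder principal, not a required value.
+
+         import org.apache.hadoop.io.Text;
+         import org.apache.hadoop.security.token.Token;
+
+         // the renewer is the principal later allowed to renew this token
+         Token<?> token = client.getDelegationToken(new Text("renewerUser"));
+    -->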
+    <method name="renewDelegationToken" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link Token#renew} instead">
+      <param name="token" type="org.apache.hadoop.security.token.Token"/>
+      <exception name="SecretManager.InvalidToken" type="org.apache.hadoop.security.token.SecretManager.InvalidToken"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <exception name="InterruptedException" type="java.lang.InterruptedException"/>
+      <doc>
+      <![CDATA[Renew a delegation token.
+ @param token the token to renew
+ @return the new expiration time if the renewal went well
+ @throws InvalidToken
+ @throws IOException
+ @deprecated Use {@link Token#renew} instead]]>
+      </doc>
+    </method>
+    <method name="cancelDelegationToken"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link Token#cancel} instead">
+      <param name="token" type="org.apache.hadoop.security.token.Token"/>
+      <exception name="SecretManager.InvalidToken" type="org.apache.hadoop.security.token.SecretManager.InvalidToken"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <exception name="InterruptedException" type="java.lang.InterruptedException"/>
+      <doc>
+      <![CDATA[Cancel a delegation token from the JobTracker.
+ @param token the token to cancel
+ @throws IOException
+ @deprecated Use {@link Token#cancel} instead]]>
+      </doc>
+    </method>
+    <method name="main"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="argv" type="java.lang.String[]"/>
+      <exception name="Exception" type="java.lang.Exception"/>
+    </method>
+    <field name="MAPREDUCE_CLIENT_RETRY_POLICY_ENABLED_KEY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="MAPREDUCE_CLIENT_RETRY_POLICY_ENABLED_DEFAULT" type="boolean"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="MAPREDUCE_CLIENT_RETRY_POLICY_SPEC_KEY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="MAPREDUCE_CLIENT_RETRY_POLICY_SPEC_DEFAULT" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[<code>JobClient</code> is the primary interface for the user-job to interact
+ with the cluster.
+ 
+ <code>JobClient</code> provides facilities to submit jobs, track their 
+ progress, access component-tasks' reports/logs, get the Map-Reduce cluster
+ status information, etc.
+ 
+ <p>The job submission process involves:
+ <ol>
+   <li>
+   Checking the input and output specifications of the job.
+   </li>
+   <li>
+   Computing the {@link InputSplit}s for the job.
+   </li>
+   <li>
+   Setup the requisite accounting information for the {@link DistributedCache} 
+   of the job, if necessary.
+   </li>
+   <li>
+   Copying the job's jar and configuration to the map-reduce system directory 
+   on the distributed file-system. 
+   </li>
+   <li>
+   Submitting the job to the cluster and optionally monitoring
+   its status.
+   </li>
+ </ol>
+  
+ Normally the user creates the application, describes various facets of the
+ job via {@link JobConf} and then uses the <code>JobClient</code> to submit 
+ the job and monitor its progress.
+ 
+ <p>Here is an example on how to use <code>JobClient</code>:</p>
+ <p><blockquote><pre>
+     // Create a new JobConf
+     JobConf job = new JobConf(new Configuration(), MyJob.class);
+     
+     // Specify various job-specific parameters     
+     job.setJobName("myjob");
+     
+     job.setInputPath(new Path("in"));
+     job.setOutputPath(new Path("out"));
+     
+     job.setMapperClass(MyJob.MyMapper.class);
+     job.setReducerClass(MyJob.MyReducer.class);
+
+     // Submit the job, then poll for progress until the job is complete
+     JobClient.runJob(job);
+ </pre></blockquote>
+ 
+ <b id="JobControl">Job Control</b>
+ 
+ <p>At times clients chain map-reduce jobs to accomplish complex tasks
+ which cannot be done via a single map-reduce job. This is fairly easy since
+ the output of the job typically goes to the distributed file-system, and that
+ can be used as the input for the next job.</p>
+ 
+ <p>However, this also means that the onus on ensuring jobs are complete 
+ (success/failure) lies squarely on the clients. In such situations the 
+ various job-control options are:
+ <ol>
+   <li>
+   {@link #runJob(JobConf)} : submits the job and returns only after 
+   the job has completed.
+   </li>
+   <li>
+   {@link #submitJob(JobConf)} : only submits the job; the client then polls
+   the returned handle to the {@link RunningJob} to query status and make
+   scheduling decisions.
+   </li>
+   <li>
+   {@link JobConf#setJobEndNotificationURI(String)} : sets up a notification
+   upon job-completion, thus avoiding polling.
+   </li>
+ </ol>
+ 
+ @see JobConf
+ @see ClusterStatus
+ @see Tool
+ @see DistributedCache]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.JobClient -->
+  <!-- start class org.apache.hadoop.mapred.JobConf -->
+  <class name="JobConf" extends="org.apache.hadoop.conf.Configuration"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="JobConf"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a map/reduce job configuration.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobConf" type="java.lang.Class"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a map/reduce job configuration.
+ 
+ @param exampleClass a class whose containing jar is used as the job's jar.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobConf" type="org.apache.hadoop.conf.Configuration"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a map/reduce job configuration.
+ 
+ @param conf a Configuration whose settings will be inherited.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobConf" type="org.apache.hadoop.conf.Configuration, java.lang.Class"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a map/reduce job configuration.
+ 
+ @param conf a Configuration whose settings will be inherited.
+ @param exampleClass a class whose containing jar is used as the job's jar.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobConf" type="java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a map/reduce configuration.
+
+ @param config a Configuration-format XML job description file.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobConf" type="org.apache.hadoop.fs.Path"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a map/reduce configuration.
+
+ @param config a Configuration-format XML job description file.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobConf" type="boolean"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[A new map/reduce configuration where the behavior of reading from the
+ default resources can be turned off.
+ <p>
+ If the parameter {@code loadDefaults} is false, the new instance
+ will not load resources from the default files.
+
+ @param loadDefaults specifies whether to load from the default files]]>
+      </doc>
+    </constructor>
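+    <!-- Editorial sketch (not JDiff output): the constructor variants above
+         differ only in where settings come from; "MyJob" and "job.xml" are
+         placeholder names.
+
+         JobConf fromDefaults = new JobConf();            // default resources only
+         JobConf withJar = new JobConf(MyJob.class);      // job jar inferred from class
+         JobConf fromFile = new JobConf("job.xml");       // explicit XML job description
+         JobConf bare = new JobConf(false);               // skip the default resources
+    -->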
+    <method name="getCredentials" return="org.apache.hadoop.security.Credentials"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get credentials for the job.
+ @return credentials for the job]]>
+      </doc>
+    </method>
+    <method name="getJar" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the user jar for the map-reduce job.
+ 
+ @return the user jar for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="setJar"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jar" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the user jar for the map-reduce job.
+ 
+ @param jar the user jar for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="getJarUnpackPattern" return="java.util.regex.Pattern"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the pattern for jar contents to unpack on the tasktracker.]]>
+      </doc>
+    </method>
+    <method name="setJarByClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="cls" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the job's jar file by finding an example class location.
+ 
+ @param cls the example class.]]>
+      </doc>
+    </method>
+    <method name="getLocalDirs" return="java.lang.String[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="deleteLocalFiles"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Use MRAsyncDiskService.moveAndDeleteAllVolumes instead.]]>
+      </doc>
+    </method>
+    <method name="deleteLocalFiles"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="subdir" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getLocalPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pathString" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Constructs a local file name. Files are distributed among configured
+ local directories.]]>
+      </doc>
+    </method>
+    <method name="getUser" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the reported username for this job.
+ 
+ @return the username]]>
+      </doc>
+    </method>
+    <method name="setUser"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="user" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the reported username for this job.
+ 
+ @param user the username for this job.]]>
+      </doc>
+    </method>
+    <method name="setKeepFailedTaskFiles"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="keep" type="boolean"/>
+      <doc>
+      <![CDATA[Set whether the framework should keep the intermediate files for 
+ failed tasks.
+ 
+ @param keep <code>true</code> if framework should keep the intermediate files 
+             for failed tasks, <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="getKeepFailedTaskFiles" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Should the temporary files for failed tasks be kept?
+ 
+ @return should the files be kept?]]>
+      </doc>
+    </method>
+    <method name="setKeepTaskFilesPattern"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="pattern" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set a regular expression for task names that should be kept. 
+ The regular expression ".*_m_000123_0" would keep the files
+ for the first instance of map 123 that ran.
+ 
+ @param pattern the java.util.regex.Pattern to match against the 
+        task names.]]>
+      </doc>
+    </method>
+    <method name="getKeepTaskFilesPattern" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the regular expression that is matched against the task names
+ to see if we need to keep the files.
+ 
+ @return the pattern as a string, if it was set, otherwise null.]]>
+      </doc>
+    </method>
+    <method name="setWorkingDirectory"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="dir" type="org.apache.hadoop.fs.Path"/>
+      <doc>
+      <![CDATA[Set the current working directory for the default file system.
+ 
+ @param dir the new current working directory.]]>
+      </doc>
+    </method>
+    <method name="getWorkingDirectory" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the current working directory for the default file system.
+ 
+ @return the directory name.]]>
+      </doc>
+    </method>
+    <method name="setNumTasksToExecutePerJvm"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="numTasks" type="int"/>
+      <doc>
+      <![CDATA[Sets the number of tasks that a spawned task JVM should run
+ before it exits.
+ @param numTasks the number of tasks to execute; defaults to 1;
+ -1 signifies no limit]]>
+      </doc>
+    </method>
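+    <!-- Editorial sketch (not JDiff output): JVM reuse; the value 10 is an
+         arbitrary illustration, not a recommendation.
+
+         conf.setNumTasksToExecutePerJvm(10);   // each task JVM runs 10 tasks
+         conf.setNumTasksToExecutePerJvm(-1);   // or: no limit on reuse
+    -->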
+    <method name="getNumTasksToExecutePerJvm" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the number of tasks that a spawned JVM should execute.]]>
+      </doc>
+    </method>
+    <method name="getInputFormat" return="org.apache.hadoop.mapred.InputFormat"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link InputFormat} implementation for the map-reduce job,
+ defaults to {@link TextInputFormat} if not specified explicitly.
+ 
+ @return the {@link InputFormat} implementation for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="setInputFormat"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link InputFormat} implementation for the map-reduce job.
+ 
+ @param theClass the {@link InputFormat} implementation for the map-reduce 
+                 job.]]>
+      </doc>
+    </method>
+    <method name="getOutputFormat" return="org.apache.hadoop.mapred.OutputFormat"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link OutputFormat} implementation for the map-reduce job,
+ defaults to {@link TextOutputFormat} if not specified explicitly.
+ 
+ @return the {@link OutputFormat} implementation for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="getOutputCommitter" return="org.apache.hadoop.mapred.OutputCommitter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link OutputCommitter} implementation for the map-reduce job,
+ defaults to {@link FileOutputCommitter} if not specified explicitly.
+ 
+ @return the {@link OutputCommitter} implementation for the map-reduce job.]]>
+      </doc>
+    </method>
+    <method name="setOutputCommitter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link OutputCommitter} implementation for the map-reduce job.
+ 
+ @param theClass the {@link OutputCommitter} implementation for the map-reduce 
+                 job.]]>
+      </doc>
+    </method>
+    <method name="setOutputFormat"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link OutputFormat} implementation for the map-reduce job.
+ 
+ @param theClass the {@link OutputFormat} implementation for the map-reduce 
+                 job.]]>
+      </doc>
+    </method>
+    <method name="setCompressMapOutput"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="compress" type="boolean"/>
+      <doc>
+      <![CDATA[Should the map outputs be compressed before transfer?
+ 
+ @param compress should the map outputs be compressed?]]>
+      </doc>
+    </method>
+    <method name="getCompressMapOutput" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Are the outputs of the maps to be compressed?
+ 
+ @return <code>true</code> if the outputs of the maps are to be compressed,
+         <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="setMapOutputCompressorClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="codecClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the given class as the  {@link CompressionCodec} for the map outputs.
+ 
+ @param codecClass the {@link CompressionCodec} class that will compress  
+                   the map outputs.]]>
+      </doc>
+    </method>
+    <method name="getMapOutputCompressorClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="defaultValue" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Get the {@link CompressionCodec} for compressing the map outputs.
+ 
+ @param defaultValue the {@link CompressionCodec} to return if not set
+ @return the {@link CompressionCodec} class that should be used to compress the 
+         map outputs.
+ @throws IllegalArgumentException if the class was specified, but not found]]>
+      </doc>
+    </method>
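+    <!-- Editorial sketch (not JDiff output): enabling map-output compression;
+         GzipCodec is one codec choice among several.
+
+         import org.apache.hadoop.io.compress.GzipCodec;
+
+         conf.setCompressMapOutput(true);
+         conf.setMapOutputCompressorClass(GzipCodec.class);
+    -->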
+    <method name="getMapOutputKeyClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the key class for the map output data. If it is not set, use the
+ (final) output key class. This allows the map output key class to be
+ different than the final output key class.
+  
+ @return the map output key class.]]>
+      </doc>
+    </method>
+    <method name="setMapOutputKeyClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the key class for the map output data. This allows the user to
+ specify the map output key class to be different than the final output
+ key class.
+ 
+ @param theClass the map output key class.]]>
+      </doc>
+    </method>
+    <method name="getMapOutputValueClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the value class for the map output data. If it is not set, use the
+ (final) output value class. This allows the map output value class to be
+ different than the final output value class.
+  
+ @return the map output value class.]]>
+      </doc>
+    </method>
+    <method name="setMapOutputValueClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the value class for the map output data. This allows the user to
+ specify the map output value class to be different than the final output
+ value class.
+ 
+ @param theClass the map output value class.]]>
+      </doc>
+    </method>
+    <method name="getOutputKeyClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the key class for the job output data.
+ 
+ @return the key class for the job output data.]]>
+      </doc>
+    </method>
+    <method name="setOutputKeyClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the key class for the job output data.
+ 
+ @param theClass the key class for the job output data.]]>
+      </doc>
+    </method>
+    <method name="getOutputKeyComparator" return="org.apache.hadoop.io.RawComparator"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link RawComparator} comparator used to compare keys.
+ 
+ @return the {@link RawComparator} comparator used to compare keys.]]>
+      </doc>
+    </method>
+    <method name="setOutputKeyComparatorClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link RawComparator} comparator used to compare keys.
+ 
+ @param theClass the {@link RawComparator} comparator used to 
+                 compare keys.
+ @see #setOutputValueGroupingComparator(Class)]]>
+      </doc>
+    </method>
+    <method name="setKeyFieldComparatorOptions"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="keySpec" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the {@link KeyFieldBasedComparator} options used to compare keys.
+ 
+ @param keySpec the key specification of the form -k pos1[,pos2], where,
+  pos is of the form f[.c][opts], where f is the number
+  of the key field to use, and c is the number of the first character from
+  the beginning of the field. Fields and character positions are numbered 
+  starting with 1; a character position of zero in pos2 indicates the
+  field's last character. If '.c' is omitted from pos1, it defaults to 1
+  (the beginning of the field); if omitted from pos2, it defaults to 0 
+  (the end of the field). opts are ordering options. The supported options
+  are:
+    -n, (Sort numerically)
+    -r, (Reverse the result of comparison)]]>
+      </doc>
+    </method>
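+    <!-- Editorial sketch (not JDiff output): one reading of the -k key
+         specification described above; the key's field layout is assumed.
+
+         // sort on the second key field only, numerically and in reverse
+         conf.setKeyFieldComparatorOptions("-k2,2nr");
+    -->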
+    <method name="getKeyFieldComparatorOption" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link KeyFieldBasedComparator} options.]]>
+      </doc>
+    </method>
+    <method name="setKeyFieldPartitionerOptions"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="keySpec" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the {@link KeyFieldBasedPartitioner} options used for 
+ {@link Partitioner}
+ 
+ @param keySpec the key specification of the form -k pos1[,pos2], where,
+  pos is of the form f[.c][opts], where f is the number
+  of the key field to use, and c is the number of the first character from
+  the beginning of the field. Fields and character positions are numbered 
+  starting with 1; a character position of zero in pos2 indicates the
+  field's last character. If '.c' is omitted from pos1, it defaults to 1
+  (the beginning of the field); if omitted from pos2, it defaults to 0 
+  (the end of the field).]]>
+      </doc>
+    </method>
+    <method name="getKeyFieldPartitionerOption" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link KeyFieldBasedPartitioner} options.]]>
+      </doc>
+    </method>
+    <method name="getCombinerKeyGroupingComparator" return="org.apache.hadoop.io.RawComparator"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the user-defined {@link WritableComparable} comparator for
+ grouping keys of inputs to the combiner.
+
+ @return comparator set by the user for grouping values.
+ @see #setCombinerKeyGroupingComparator(Class) for details.]]>
+      </doc>
+    </method>
+    <method name="getOutputValueGroupingComparator" return="org.apache.hadoop.io.RawComparator"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the user-defined {@link WritableComparable} comparator for 
+ grouping keys of inputs to the reduce.
+ 
+ @return comparator set by the user for grouping values.
+ @see #setOutputValueGroupingComparator(Class) for details.]]>
+      </doc>
+    </method>
+    <method name="setCombinerKeyGroupingComparator"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the user-defined {@link RawComparator} comparator for
+ grouping keys in the input to the combiner.
+
+ <p>This comparator should be provided if the equivalence rules for keys
+ for sorting the intermediates are different from those for grouping keys
+ before each call to
+ {@link Reducer#reduce(Object, java.util.Iterator, OutputCollector, Reporter)}.</p>
+
+ <p>For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed
+ in a single call to the reduce function if K1 and K2 compare as equal.</p>
+
+ <p>Since {@link #setOutputKeyComparatorClass(Class)} can be used to control
+ how keys are sorted, this can be used in conjunction to simulate
+ <i>secondary sort on values</i>.</p>
+
+ <p><i>Note</i>: This is not a guarantee of the combiner sort being
+ <i>stable</i> in any sense. (In any case, with the order of available
+ map-outputs to the combiner being non-deterministic, it wouldn't make
+ that much sense.)</p>
+
+ @param theClass the comparator class to be used for grouping keys for the
+ combiner. It should implement <code>RawComparator</code>.
+ @see #setOutputKeyComparatorClass(Class)]]>
+      </doc>
+    </method>
+    <method name="setOutputValueGroupingComparator"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the user-defined {@link RawComparator} comparator for 
+ grouping keys in the input to the reduce.
+ 
+ <p>This comparator should be provided if the equivalence rules for keys
+ for sorting the intermediates are different from those for grouping keys
+ before each call to 
+ {@link Reducer#reduce(Object, java.util.Iterator, OutputCollector, Reporter)}.</p>
+  
+ <p>For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed
+ in a single call to the reduce function if K1 and K2 compare as equal.</p>
+ 
+ <p>Since {@link #setOutputKeyComparatorClass(Class)} can be used to control 
+ how keys are sorted, this can be used in conjunction to simulate 
+ <i>secondary sort on values</i>.</p>
+  
+ <p><i>Note</i>: This is not a guarantee of the reduce sort being 
+ <i>stable</i> in any sense. (In any case, with the order of available 
+ map-outputs to the reduce being non-deterministic, it wouldn't make 
+ that much sense.)</p>
+ 
+ @param theClass the comparator class to be used for grouping keys. 
+                 It should implement <code>RawComparator</code>.
+ @see #setOutputKeyComparatorClass(Class)
+ @see #setCombinerKeyGroupingComparator(Class)]]>
+      </doc>
+    </method>
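+    <!-- Editorial sketch (not JDiff output): the secondary-sort arrangement
+         described above; FirstThenSecondComparator and FirstOnlyComparator
+         are hypothetical RawComparator implementations over a composite key.
+
+         // full sort order: primary field, then secondary field
+         conf.setOutputKeyComparatorClass(FirstThenSecondComparator.class);
+         // grouping for reduce calls: primary field only, so one reduce()
+         // call sees all values of a primary key, in secondary order
+         conf.setOutputValueGroupingComparator(FirstOnlyComparator.class);
+    -->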
+    <method name="getUseNewMapper" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Should the framework use the new context-object code for running
+ the mapper?
+ @return true, if the new api should be used]]>
+      </doc>
+    </method>
+    <method name="setUseNewMapper"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="flag" type="boolean"/>
+      <doc>
+      <![CDATA[Set whether the framework should use the new api for the mapper.
+ This is the default for jobs submitted with the new Job api.
+ @param flag true, if the new api should be used]]>
+      </doc>
+    </method>
+    <method name="getUseNewReducer" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Should the framework use the new context-object code for running
+ the reducer?
+ @return true, if the new api should be used]]>
+      </doc>
+    </method>
+    <method name="setUseNewReducer"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="flag" type="boolean"/>
+      <doc>
+      <![CDATA[Set whether the framework should use the new api for the reducer. 
+ This is the default for jobs submitted with the new Job api.
+ @param flag true, if the new api should be used]]>
+      </doc>
+    </method>
+    <method name="getOutputValueClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the value class for job outputs.
+ 
+ @return the value class for job outputs.]]>
+      </doc>
+    </method>
+    <method name="setOutputValueClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the value class for job outputs.
+ 
+ @param theClass the value class for job outputs.]]>
+      </doc>
+    </method>
+    <method name="getMapperClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link Mapper} class for the job.
+ 
+ @return the {@link Mapper} class for the job.]]>
+      </doc>
+    </method>
+    <method name="setMapperClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link Mapper} class for the job.
+ 
+ @param theClass the {@link Mapper} class for the job.]]>
+      </doc>
+    </method>
+    <method name="getMapRunnerClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link MapRunnable} class for the job.
+ 
+ @return the {@link MapRunnable} class for the job.]]>
+      </doc>
+    </method>
+    <method name="setMapRunnerClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Expert: Set the {@link MapRunnable} class for the job.
+ 
+ Typically used to exert greater control on {@link Mapper}s.
+ 
+ @param theClass the {@link MapRunnable} class for the job.]]>
+      </doc>
+    </method>
+    <method name="getPartitionerClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link Partitioner} used to partition {@link Mapper}-outputs 
+ to be sent to the {@link Reducer}s.
+ 
+ @return the {@link Partitioner} used to partition map-outputs.]]>
+      </doc>
+    </method>
+    <method name="setPartitionerClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link Partitioner} class used to partition 
+ {@link Mapper}-outputs to be sent to the {@link Reducer}s.
+ 
+ @param theClass the {@link Partitioner} used to partition map-outputs.]]>
+      </doc>
+    </method>
+    <method name="getReducerClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link Reducer} class for the job.
+ 
+ @return the {@link Reducer} class for the job.]]>
+      </doc>
+    </method>
+    <method name="setReducerClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the {@link Reducer} class for the job.
+ 
+ @param theClass the {@link Reducer} class for the job.]]>
+      </doc>
+    </method>
+    <method name="getCombinerClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the user-defined <i>combiner</i> class used to combine map-outputs 
+ before being sent to the reducers. Typically the combiner is the same as
+ the {@link Reducer} for the job, i.e. {@link #getReducerClass()}.
+ 
+ @return the user-defined combiner class used to combine map-outputs.]]>
+      </doc>
+    </method>
+    <method name="setCombinerClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the user-defined <i>combiner</i> class used to combine map-outputs 
+ before being sent to the reducers. 
+ 
+ <p>The combiner is an application-specified aggregation operation, which
+ can help cut down the amount of data transferred between the 
+ {@link Mapper} and the {@link Reducer}, leading to better performance.</p>
+ 
+ <p>The framework may invoke the combiner 0, 1, or multiple times, in both
+ the mapper and reducer tasks. In general, the combiner is called as the
+ sort/merge result is written to disk. The combiner must:
+ <ul>
+   <li> be side-effect free</li>
+   <li> have the same input and output key types and the same input and 
+        output value types</li>
+ </ul>
+ 
+ <p>Typically the combiner is the same as the <code>Reducer</code> for the
+ job i.e. {@link #setReducerClass(Class)}.</p>
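+ 
+ <p>A minimal sketch (the <code>WordCount</code> classes here are
+ hypothetical):</p>
+ <p><blockquote><pre>
+     JobConf job = new JobConf(WordCount.class);
+     job.setReducerClass(WordCount.Reduce.class);
+     // Reusing the reducer as the combiner is safe here because summing
+     // counts is both associative and commutative.
+     job.setCombinerClass(WordCount.Reduce.class);
+ </pre></blockquote>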
+ 
+ @param theClass the user-defined combiner class used to combine 
+                 map-outputs.]]>
+      </doc>
+    </method>
+    <method name="getSpeculativeExecution" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Should speculative execution be used for this job? 
+ Defaults to <code>true</code>.
+ 
+ @return <code>true</code> if speculative execution should be used for this job,
+         <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="setSpeculativeExecution"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="speculativeExecution" type="boolean"/>
+      <doc>
+      <![CDATA[Turn speculative execution on or off for this job. 
+ 
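+ <p>For example, to disable speculative execution for both map and reduce
+ tasks of this job:</p>
+ <p><blockquote><pre>
+     job.setSpeculativeExecution(false);
+ </pre></blockquote>
+ 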
+ @param speculativeExecution <code>true</code> if speculative execution 
+                             should be turned on, else <code>false</code>.]]>
+      </doc>
+    </method>
+    <method name="getMapSpeculativeExecution" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Should speculative execution be used for this job for map tasks? 
+ Defaults to <code>true</code>.
+ 
+ @return <code>true</code> if speculative execution should be
+                           used for this job for map tasks,
+         <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="setMapSpeculativeExecution"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="speculativeExecution" type="boolean"/>
+      <doc>
+      <![CDATA[Turn speculative execution on or off for this job for map tasks. 
+ 
+ @param speculativeExecution <code>true</code> if speculative execution 
+                             should be turned on for map tasks,
+                             else <code>false</code>.]]>
+      </doc>
+    </method>
+    <method name="getReduceSpeculativeExecution" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Should speculative execution be used for this job for reduce tasks? 
+ Defaults to <code>true</code>.
+ 
+ @return <code>true</code> if speculative execution should be used
+                           for reduce tasks for this job,
+         <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="setReduceSpeculativeExecution"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="speculativeExecution" type="boolean"/>
+      <doc>
+      <![CDATA[Turn speculative execution on or off for this job for reduce tasks. 
+ 
+ @param speculativeExecution <code>true</code> if speculative execution 
+                             should be turned on for reduce tasks,
+                             else <code>false</code>.]]>
+      </doc>
+    </method>
+    <method name="getNumMapTasks" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the configured number of map tasks for this job.
+ Defaults to <code>1</code>.
+ 
+ @return the number of map tasks for this job.]]>
+      </doc>
+    </method>
+    <method name="setNumMapTasks"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="n" type="int"/>
+      <doc>
+      <![CDATA[Set the number of map tasks for this job.
+ 
+ <p><i>Note</i>: This is only a <i>hint</i> to the framework. The actual 
+ number of spawned map tasks depends on the number of {@link InputSplit}s 
+ generated by the job's {@link InputFormat#getSplits(JobConf, int)}.
+  
+ A custom {@link InputFormat} is typically used to accurately control 
+ the number of map tasks for the job.</p>
+ 
+ <b id="NoOfMaps">How many maps?</b>
+ 
+ <p>The number of maps is usually driven by the total size of the inputs 
+ i.e. total number of blocks of the input files.</p>
+  
+ <p>The right level of parallelism for maps seems to be around 10-100 maps
+ per node, although it has been set as high as 300 or so for very CPU-light
+ map tasks. Task setup takes a while, so it is best if the maps take at
+ least a minute to execute.</p>
+ 
+ <p>The default behavior of file-based {@link InputFormat}s is to split the 
+ input into <i>logical</i> {@link InputSplit}s based on the total size, in 
+ bytes, of input files. However, the {@link FileSystem} blocksize of the 
+ input files is treated as an upper bound for input splits. A lower bound 
+ on the split size can be set via 
+ <a href="{@docRoot}/../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml#mapreduce.input.fileinputformat.split.minsize">
+ mapreduce.input.fileinputformat.split.minsize</a>.</p>
+  
+ <p>Thus, if you expect 10TB of input data and have a blocksize of 128MB, 
+ you'll end up with 82,000 maps, unless {@link #setNumMapTasks(int)} is 
+ used to set it even higher.</p>
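+ 
+ <p>A sketch (the split-size value and map-count hint are illustrative):</p>
+ <p><blockquote><pre>
+     // A hint only: the InputFormat's splits decide the real map count.
+     job.setNumMapTasks(100);
+     // Raise the lower bound on split size to get fewer, larger maps.
+     job.setLong("mapreduce.input.fileinputformat.split.minsize",
+                 256L * 1024 * 1024);
+ </pre></blockquote>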
+ 
+ @param n the number of map tasks for this job.
+ @see InputFormat#getSplits(JobConf, int)
+ @see FileInputFormat
+ @see FileSystem#getDefaultBlockSize()
+ @see FileStatus#getBlockSize()]]>
+      </doc>
+    </method>
+    <method name="getNumReduceTasks" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the configured number of reduce tasks for this job. Defaults to
+ <code>1</code>.
+
+ @return the number of reduce tasks for this job.]]>
+      </doc>
+    </method>
+    <method name="setNumReduceTasks"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="n" type="int"/>
+      <doc>
+      <![CDATA[Set the requisite number of reduce tasks for this job.
+ 
+ <b id="NoOfReduces">How many reduces?</b>
+ 
+ <p>The right number of reduces seems to be <code>0.95</code> or
+ <code>1.75</code> multiplied by
+ (<i>available memory for reduce tasks</i> /
+ <a href="{@docRoot}/../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml#mapreduce.reduce.memory.mb">
+ mapreduce.reduce.memory.mb</a>).
+ The available memory for reduce tasks should be smaller than
+ numNodes * yarn.nodemanager.resource.memory-mb, since memory is
+ shared by map tasks and other applications.
+ </p>
+ 
+ <p>With <code>0.95</code> all of the reduces can launch immediately and 
+ start transferring map outputs as the maps finish. With <code>1.75</code>
+ the faster nodes will finish their first round of reduces and launch a 
+ second wave of reduces doing a much better job of load balancing.</p>
+ 
+ <p>Increasing the number of reduces increases the framework overhead, but
+ improves load balancing and lowers the cost of failures.</p>
+ 
+ <p>The scaling factors above are slightly less than whole numbers to 
+ reserve a few reduce slots in the framework for speculative-tasks, failures
+ etc.</p> 
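+ 
+ <p>A worked example with illustrative numbers: on a 10-node cluster with
+ 40960 MB of memory available for reduces per node and
+ <code>mapreduce.reduce.memory.mb</code> set to 4096, the guideline gives
+ 0.95 * (10 * 40960 / 4096) = 95 reduces for a single wave, or
+ 1.75 * 100 = 175 for two load-balanced waves:</p>
+ <p><blockquote><pre>
+     job.setNumReduceTasks(95);
+ </pre></blockquote>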
+
+ <b id="ReducerNone">Reducer NONE</b>
+ 
+ <p>It is legal to set the number of reduce-tasks to <code>zero</code>.</p>
+ 
+ <p>In this case the output of the map-tasks goes directly to the distributed
+ file-system, to the path set by
+ {@link FileOutputFormat#setOutputPath(JobConf, Path)}. Also, the
+ framework doesn't sort the map-outputs before writing them out to HDFS.</p>
+ 
+ @param n the number of reduce tasks for this job.]]>
+      </doc>
+    </method>
+    <method name="getMaxMapAttempts" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the configured number of maximum attempts that will be made to run a
+ map task, as specified by the <code>mapreduce.map.maxattempts</code>
+ property. If this property is not already set, the default is 4 attempts.
+  
+ @return the max number of attempts per map task.]]>
+      </doc>
+    </method>
+    <method name="setMaxMapAttempts"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="n" type="int"/>
+      <doc>
+      <![CDATA[Expert: Set the number of maximum attempts that will be made to run a
+ map task.
+ 
+ @param n the number of attempts per map task.]]>
+      </doc>
+    </method>
+    <method name="getMaxReduceAttempts" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the configured number of maximum attempts  that will be made to run a
+ reduce task, as specified by the <code>mapreduce.reduce.maxattempts</code>
+ property. If this property is not already set, the default is 4 attempts.
+ 
+ @return the max number of attempts per reduce task.]]>
+      </doc>
+    </method>
+    <method name="setMaxReduceAttempts"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="n" type="int"/>
+      <doc>
+      <![CDATA[Expert: Set the number of maximum attempts that will be made to run a
+ reduce task.
+ 
+ @param n the number of attempts per reduce task.]]>
+      </doc>
+    </method>
+    <method name="getJobName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the user-specified job name. This is only used to identify the 
+ job to the user.
+ 
+ @return the job's name, defaulting to "".]]>
+      </doc>
+    </method>
+    <method name="setJobName"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the user-specified job name.
+ 
+ @param name the job's new name.]]>
+      </doc>
+    </method>
+    <method name="getSessionId" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the user-specified session identifier. The default is the empty string.
+
+ The session identifier is used to tag metric data that is reported to some
+ performance metrics system via the org.apache.hadoop.metrics API.  The 
+ session identifier is intended, in particular, for use by Hadoop-On-Demand 
+ (HOD) which allocates a virtual Hadoop cluster dynamically and transiently. 
+ HOD will set the session identifier by modifying the mapred-site.xml file 
+ before starting the cluster.
+
+ When not running under HOD, this identifier is expected to remain set to
+ the empty string.
+
+ @return the session identifier, defaulting to "".]]>
+      </doc>
+    </method>
+    <method name="setSessionId"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="sessionId" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the user-specified session identifier.  
+
+ @param sessionId the new session id.]]>
+      </doc>
+    </method>
+    <method name="setMaxTaskFailuresPerTracker"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="noFailures" type="int"/>
+      <doc>
+      <![CDATA[Set the maximum no. of failures of a given job per tasktracker.
+ If the no. of task failures exceeds <code>noFailures</code>, the 
+ tasktracker is <i>blacklisted</i> for this job. 
+ 
+ @param noFailures maximum no. of failures of a given job per tasktracker.]]>
+      </doc>
+    </method>
+    <method name="getMaxTaskFailuresPerTracker" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Expert: Get the maximum no. of failures of a given job per tasktracker.
+ If the no. of task failures exceeds this, the tasktracker is
+ <i>blacklisted</i> for this job. 
+ 
+ @return the maximum no. of failures of a given job per tasktracker.]]>
+      </doc>
+    </method>
+    <method name="getMaxMapTaskFailuresPercent" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the maximum percentage of map tasks that can fail without 
+ the job being aborted. 
+ 
+ Each map task is executed a minimum of {@link #getMaxMapAttempts()} 
+ attempts before being declared as <i>failed</i>.
+  
+ Defaults to <code>zero</code>, i.e. <i>any</i> failed map-task results in
+ the job being declared as {@link JobStatus#FAILED}.
+ 
+ @return the maximum percentage of map tasks that can fail without
+         the job being aborted.]]>
+      </doc>
+    </method>
+    <method name="setMaxMapTaskFailuresPercent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="percent" type="int"/>
+      <doc>
+      <![CDATA[Expert: Set the maximum percentage of map tasks that can fail without the
+ job being aborted. 
+ 
+ Each map task is executed a minimum of {@link #getMaxMapAttempts} attempts 
+ before being declared as <i>failed</i>.
+ 
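+ <p>For example, to tolerate up to 5% failed map tasks before the job is
+ aborted:</p>
+ <p><blockquote><pre>
+     job.setMaxMapTaskFailuresPercent(5);
+ </pre></blockquote>
+ 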
+ @param percent the maximum percentage of map tasks that can fail without 
+                the job being aborted.]]>
+      </doc>
+    </method>
+    <method name="getMaxReduceTaskFailuresPercent" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the maximum percentage of reduce tasks that can fail without 
+ the job being aborted. 
+ 
+ Each reduce task is executed a minimum of {@link #getMaxReduceAttempts()} 
+ attempts before being declared as <i>failed</i>.
+ 
+ Defaults to <code>zero</code>, i.e. <i>any</i> failed reduce-task results 
+ in the job being declared as {@link JobStatus#FAILED}.
+ 
+ @return the maximum percentage of reduce tasks that can fail without
+         the job being aborted.]]>
+      </doc>
+    </method>
+    <method name="setMaxReduceTaskFailuresPercent"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="percent" type="int"/>
+      <doc>
+      <![CDATA[Set the maximum percentage of reduce tasks that can fail without the job
+ being aborted.
+ 
+ Each reduce task is executed a minimum of {@link #getMaxReduceAttempts()} 
+ attempts before being declared as <i>failed</i>.
+ 
+ @param percent the maximum percentage of reduce tasks that can fail without 
+                the job being aborted.]]>
+      </doc>
+    </method>
+    <method name="setJobPriority"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="prio" type="org.apache.hadoop.mapred.JobPriority"/>
+      <doc>
+      <![CDATA[Set {@link JobPriority} for this job.
+
+ @param prio the {@link JobPriority} for this job.]]>
+      </doc>
+    </method>
+    <method name="setJobPriorityAsInteger"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="prio" type="int"/>
+      <doc>
+      <![CDATA[Set the priority for this job, expressed as an integer.
+
+ @param prio the priority for this job, as an integer.]]>
+      </doc>
+    </method>
+    <method name="getJobPriority" return="org.apache.hadoop.mapred.JobPriority"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the {@link JobPriority} for this job.
+
+ @return the {@link JobPriority} for this job.]]>
+      </doc>
+    </method>
+    <method name="getJobPriorityAsInteger" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the priority for this job.
+
+ @return the priority for this job.]]>
+      </doc>
+    </method>
+    <method name="getProfileEnabled" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get whether the task profiling is enabled.
+ @return true if some tasks will be profiled]]>
+      </doc>
+    </method>
+    <method name="setProfileEnabled"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="newValue" type="boolean"/>
+      <doc>
+      <![CDATA[Set whether the system should collect profiler information for some of 
+ the tasks in this job. The information is stored in the user log
+ directory.
+ @param newValue true means it should be gathered]]>
+      </doc>
+    </method>
+    <method name="getProfileParams" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the profiler configuration arguments.
+
+ The default value for this property is
+ "-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s"
+ 
+ @return the parameters to pass to the task child to configure profiling]]>
+      </doc>
+    </method>
+    <method name="setProfileParams"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="value" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the profiler configuration arguments. If the string contains a '%s' it
+ will be replaced with the name of the profiling output file when the task
+ runs.
+
+ This value is passed to the task child JVM on the command line.
+
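+ <p>A sketch, reusing the documented default agent string:</p>
+ <p><blockquote><pre>
+     job.setProfileEnabled(true);
+     // '%s' is replaced with the profiling output file name at run time.
+     job.setProfileParams(
+         "-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s");
+     // Profile only the first two map tasks.
+     job.setProfileTaskRange(true, "0-1");
+ </pre></blockquote>
+ 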
+ @param value the configuration string]]>
+      </doc>
+    </method>
+    <method name="getProfileTaskRange" return="org.apache.hadoop.conf.Configuration.IntegerRanges"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="isMap" type="boolean"/>
+      <doc>
+      <![CDATA[Get the range of maps or reduces to profile.
+ @param isMap is the task a map?
+ @return the task ranges]]>
+      </doc>
+    </method>
+    <method name="setProfileTaskRange"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="isMap" type="boolean"/>
+      <param name="newValue" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the ranges of maps or reduces to profile. setProfileEnabled(true) 
+ must also be called.
+ @param newValue a set of integer ranges of the map ids]]>
+      </doc>
+    </method>
+    <method name="setMapDebugScript"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="mDbgScript" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the debug script to run when the map tasks fail.
+ 
+ <p>The debug script can aid debugging of failed map tasks. The script is
+ given the task's stdout, stderr, syslog, and jobconf files as arguments.</p>
+ 
+ <p>The debug command, run on the node where the map failed, is:</p>
+ <p><blockquote><pre>
+ $script $stdout $stderr $syslog $jobconf
+ </pre></blockquote>
+ 
+ <p> The script file is distributed through {@link DistributedCache} 
+ APIs. The script needs to be symlinked. </p>
+ 
+ <p>Here is an example of how to submit a script:
+ <p><blockquote><pre>
+ job.setMapDebugScript("./myscript");
+ DistributedCache.createSymlink(job);
+ DistributedCache.addCacheFile("/debug/scripts/myscript#myscript");
+ </pre></blockquote>
+ 
+ @param mDbgScript the script name]]>
+      </doc>
+    </method>
+    <method name="getMapDebugScript" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the map task's debug script.
+ 
+ @return the debug Script for the mapred job for failed map tasks.
+ @see #setMapDebugScript(String)]]>
+      </doc>
+    </method>
+    <method name="setReduceDebugScript"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="rDbgScript" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the debug script to run when the reduce tasks fail.
+ 
+ <p>The debug script can aid debugging of failed reduce tasks. The script
+ is given the task's stdout, stderr, syslog, and jobconf files as arguments.</p>
+ 
+ <p>The debug command, run on the node where the reduce failed, is:</p>
+ <p><blockquote><pre>
+ $script $stdout $stderr $syslog $jobconf
+ </pre></blockquote>
+ 
+ <p> The script file is distributed through {@link DistributedCache} 
+ APIs. The script file needs to be symlinked.</p>
+ 
+ <p>Here is an example of how to submit a script:
+ <p><blockquote><pre>
+ job.setReduceDebugScript("./myscript");
+ DistributedCache.createSymlink(job);
+ DistributedCache.addCacheFile("/debug/scripts/myscript#myscript");
+ </pre></blockquote>
+ 
+ @param rDbgScript the script name]]>
+      </doc>
+    </method>
+    <method name="getReduceDebugScript" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the reduce task's debug script.
+ 
+ @return the debug script for the mapred job for failed reduce tasks.
+ @see #setReduceDebugScript(String)]]>
+      </doc>
+    </method>
+    <method name="getJobEndNotificationURI" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the uri to be invoked in order to send a notification after the job
+ has completed (success/failure). 
+ 
+ @return the job end notification uri, <code>null</code> if it hasn't
+         been set.
+ @see #setJobEndNotificationURI(String)]]>
+      </doc>
+    </method>
+    <method name="setJobEndNotificationURI"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="uri" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the uri to be invoked in order to send a notification after the job
+ has completed (success/failure).
+ 
+ <p>The uri can contain 2 special parameters: <tt>$jobId</tt> and 
+ <tt>$jobStatus</tt>. Those, if present, are replaced by the job's 
+ identifier and completion-status respectively.</p>
+ 
+ <p>This is typically used by application-writers to implement chaining of 
+ Map-Reduce jobs in an <i>asynchronous manner</i>.</p>
+ 
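+ <p>For example (the host and path are illustrative):</p>
+ <p><blockquote><pre>
+     // $jobId and $jobStatus are expanded by the framework.
+     job.setJobEndNotificationURI(
+         "http://workflow.example.com/notify/$jobId/$jobStatus");
+ </pre></blockquote>
+ 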
+ @param uri the job end notification uri
+ @see JobStatus]]>
+      </doc>
+    </method>
+    <method name="getJobEndNotificationCustomNotifierClass" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns the class to be invoked in order to send a notification
+ after the job has completed (success/failure).
+
+ @return the fully-qualified name of the class which implements
+ {@link org.apache.hadoop.mapreduce.CustomJobEndNotifier} set through the
+ {@link org.apache.hadoop.mapreduce.MRJobConfig#MR_JOB_END_NOTIFICATION_CUSTOM_NOTIFIER_CLASS}
+ property.
+
+ @see JobConf#setJobEndNotificationCustomNotifierClass(java.lang.String)
+ @see org.apache.hadoop.mapreduce.MRJobConfig#MR_JOB_END_NOTIFICATION_CUSTOM_NOTIFIER_CLASS]]>
+      </doc>
+    </method>
+    <method name="setJobEndNotificationCustomNotifierClass"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="customNotifierClassName" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Sets the class to be invoked in order to send a notification after the job
+ has completed (success/failure).
+
+ A notification URL still has to be set; it will be passed to
+ {@link org.apache.hadoop.mapreduce.CustomJobEndNotifier#notifyOnce(
+ java.net.URL, org.apache.hadoop.conf.Configuration)}
+ along with the Job's conf.
+
+ If this is set then, instead of using a simple HttpURLConnection,
+ the framework will create a new instance of this class, which must be
+ an implementation of
+ {@link org.apache.hadoop.mapreduce.CustomJobEndNotifier},
+ and will invoke that.
+
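+ <p>A sketch (the notifier class name is hypothetical):</p>
+ <p><blockquote><pre>
+     job.setJobEndNotificationURI("http://workflow.example.com/notify");
+     job.setJobEndNotificationCustomNotifierClass(
+         "com.example.MyJobEndNotifier");
+ </pre></blockquote>
+ 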
+ @param customNotifierClassName the fully-qualified name of the class
+     which implements
+     {@link org.apache.hadoop.mapreduce.CustomJobEndNotifier}
+
+ @see JobConf#setJobEndNotificationURI(java.lang.String)
+ @see
+ org.apache.hadoop.mapreduce.MRJobConfig#MR_JOB_END_NOTIFICATION_CUSTOM_NOTIFIER_CLASS]]>
+      </doc>
+    </method>
+    <method name="getJobLocalDir" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the job-specific shared directory for use as scratch space.
+ 
+ <p>
+ When a job starts, a shared directory is created at
+ <code>${mapreduce.cluster.local.dir}/taskTracker/$user/jobcache/$jobid/work/</code>.
+ This directory is exposed to the users through
+ <code>mapreduce.job.local.dir</code>,
+ so the tasks can use this space
+ as scratch space and share files among them.</p>
+ This value is also available as a system property.
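+ 
+ <p>A sketch of reading it from within a task (assuming <code>conf</code>
+ is the task's <code>JobConf</code>):</p>
+ <p><blockquote><pre>
+     String jobLocalDir = conf.get("mapreduce.job.local.dir");
+ </pre></blockquote>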
+ 
+ @return The localized job specific shared directory]]>
+      </doc>
+    </method>
+    <method name="getMemoryForMapTask" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get memory required to run a map task of the job, in MB.
+ 
+ If a value is specified in the configuration, it is returned.
+ Else, it returns {@link JobContext#DEFAULT_MAP_MEMORY_MB}.
+ <p>
+ For backward compatibility, if the job configuration sets the
+ key {@link #MAPRED_TASK_MAXVMEM_PROPERTY} to a value different
+ from {@link #DISABLED_MEMORY_LIMIT}, that value will be used
+ after converting it from bytes to MB.
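+ <p>A sketch with illustrative sizes (assuming <code>job</code> is this
+ <code>JobConf</code>):</p>
+ <p><blockquote><pre>
+     job.setMemoryForMapTask(2048);     // 2 GB per map task
+     job.setMemoryForReduceTask(4096);  // 4 GB per reduce task
+ </pre></blockquote>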
+ @return memory required to run a map task of the job, in MB.]]>
+      </doc>
+    </method>
+    <method name="setMemoryForMapTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="mem" type="long"/>
+    </method>
+    <method name="getMemoryForReduceTask" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get memory required to run a reduce task of the job, in MB.
+ 
+ If a value is specified in the configuration, it is returned.
+ Else, it returns {@link JobContext#DEFAULT_REDUCE_MEMORY_MB}.
+ <p>
+ For backward compatibility, if the job configuration sets the
+ key {@link #MAPRED_TASK_MAXVMEM_PROPERTY} to a value different
+ from {@link #DISABLED_MEMORY_LIMIT}, that value will be used
+ after converting it from bytes to MB.
+ @return memory required to run a reduce task of the job, in MB.]]>
+      </doc>
+    </method>
+    <method name="setMemoryForReduceTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="mem" type="long"/>
+    </method>
+    <method name="getQueueName" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Return the name of the queue to which this job is submitted.
+ Defaults to 'default'.
+ 
+ @return name of the queue]]>
+      </doc>
+    </method>
+    <method name="setQueueName"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="queueName" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the name of the queue to which this job should be submitted.
+ 
+ @param queueName Name of the queue]]>
+      </doc>
+    </method>
+    <method name="normalizeMemoryConfigValue" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="val" type="long"/>
+      <doc>
+      <![CDATA[Normalize negative values in the configuration.
+ 
+ @param val the value to normalize
+ @return the normalized value]]>
+      </doc>
+    </method>
+    <method name="findContainingJar" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="my_class" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Find a jar that contains a class of the same name, if any.
+ It will return a jar file, even if that is not the first thing
+ on the class path that has a class with the same name.
+ 
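+ <p>For example, to locate the jar containing a job's classes (assuming the
+ result is then registered via {@link #setJar(String)}):</p>
+ <p><blockquote><pre>
+     String jar = JobConf.findContainingJar(MyJob.class);
+     if (jar != null) {
+       job.setJar(jar);
+     }
+ </pre></blockquote>
+ 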
+ @param my_class the class to find.
+ @return a jar file that contains the class, or null.]]>
+      </doc>
+    </method>
+    <method name="getMaxVirtualMemoryForTask" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #getMemoryForMapTask()} and
+             {@link #getMemoryForReduceTask()}">
+      <doc>
+      <![CDATA[Get the memory required to run a task of this job, in bytes. See
+ {@link #MAPRED_TASK_MAXVMEM_PROPERTY}
+ <p>
+ This method is deprecated. Now, different memory limits can be
+ set for map and reduce tasks of a job, in MB. 
+ <p>
+ For backward compatibility, if the job configuration sets the
+ key {@link #MAPRED_TASK_MAXVMEM_PROPERTY}, that value is returned. 
+ Otherwise, this method will return the larger of the values returned by 
+ {@link #getMemoryForMapTask()} and {@link #getMemoryForReduceTask()}
+ after converting them into bytes.
+
+ @return Memory required to run a task of this job, in bytes.
+ @see #setMaxVirtualMemoryForTask(long)
+ @deprecated Use {@link #getMemoryForMapTask()} and
+             {@link #getMemoryForReduceTask()}]]>
+      </doc>
+    </method>
+    <method name="setMaxVirtualMemoryForTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #setMemoryForMapTask(long mem)}  and
+  Use {@link #setMemoryForReduceTask(long mem)}">
+      <param name="vmem" type="long"/>
+      <doc>
+      <![CDATA[Set the maximum amount of memory any task of this job can use. See
+ {@link #MAPRED_TASK_MAXVMEM_PROPERTY}
+ <p>
+ mapred.task.maxvmem is split into
+ mapreduce.map.memory.mb
+ and mapreduce.reduce.memory.mb;
+ each of the new keys is set
+ to mapred.task.maxvmem / 1024,
+ as the new values are in MB
+
+ @param vmem Maximum amount of virtual memory in bytes any task of this job
+             can use.
+ @see #getMaxVirtualMemoryForTask()
+ @deprecated
+  Use {@link #setMemoryForMapTask(long mem)}  and
+  Use {@link #setMemoryForReduceTask(long mem)}]]>
+      </doc>
+    </method>
+    <method name="getMaxPhysicalMemoryForTask" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="this variable is deprecated and nolonger in use.">
+      <doc>
+      <![CDATA[@deprecated this variable is deprecated and nolonger in use.]]>
+      </doc>
+    </method>
+    <method name="setMaxPhysicalMemoryForTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="mem" type="long"/>
+    </method>
+    <method name="main"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="args" type="java.lang.String[]"/>
+      <exception name="Exception" type="java.lang.Exception"/>
+    </method>
+    <field name="MAPRED_TASK_MAXVMEM_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="Use {@link #MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY} and
+ {@link #MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY}">
+      <doc>
+      <![CDATA[@deprecated Use {@link #MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY} and
+ {@link #MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY}]]>
+      </doc>
+    </field>
+    <field name="UPPER_LIMIT_ON_TASK_VMEM_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="deprecated, no comment">
+      <doc>
+      <![CDATA[@deprecated]]>
+      </doc>
+    </field>
+    <field name="MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="deprecated, no comment">
+      <doc>
+      <![CDATA[@deprecated]]>
+      </doc>
+    </field>
+    <field name="MAPRED_TASK_MAXPMEM_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="deprecated, no comment">
+      <doc>
+      <![CDATA[@deprecated]]>
+      </doc>
+    </field>
+    <field name="DISABLED_MEMORY_LIMIT" type="long"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[A value which, if set for memory-related configuration options,
+ indicates that the options are turned off.
+ Deprecated because it makes no sense in the context of MR2.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_LOCAL_DIR_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Property name for the configuration property mapreduce.cluster.local.dir]]>
+      </doc>
+    </field>
+    <field name="DEFAULT_QUEUE_NAME" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Name of the queue to which jobs will be submitted, if no queue
+ name is mentioned.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_JOB_MAP_MEMORY_MB_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, while M/R 2.x applications
+ should use {@link #MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY}]]>
+      </doc>
+    </field>
+    <field name="MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, while M/R 2.x applications
+ should use {@link #MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY}]]>
+      </doc>
+    </field>
+    <field name="UNPACK_JAR_PATTERN_DEFAULT" type="java.util.regex.Pattern"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Pattern for the default unpacking behavior for job jars]]>
+      </doc>
+    </field>
+    <field name="MAPRED_TASK_JAVA_OPTS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="Use {@link #MAPRED_MAP_TASK_JAVA_OPTS} or 
+                 {@link #MAPRED_REDUCE_TASK_JAVA_OPTS}">
+      <doc>
+      <![CDATA[Configuration key to set the java command line options for the child
+ map and reduce tasks.
+ 
+ Java opts for the task tracker child processes.
+ The following symbol, if present, will be interpolated: @taskid@. 
+ It is replaced by the current TaskID. Any other occurrences of '@' will go
+ unchanged.
+ For example, to enable verbose gc logging to a file named for the taskid in
+ /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
+          -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
+ 
+ The configuration variable {@link #MAPRED_TASK_ENV} can be used to pass 
+ other environment variables to the child processes.
+ 
+ @deprecated Use {@link #MAPRED_MAP_TASK_JAVA_OPTS} or 
+                 {@link #MAPRED_REDUCE_TASK_JAVA_OPTS}]]>
+      </doc>
+    </field>
+    <field name="MAPRED_MAP_TASK_JAVA_OPTS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Configuration key to set the java command line options for the map tasks.
+ 
+ Java opts for the task tracker child map processes.
+ The following symbol, if present, will be interpolated: @taskid@. 
+ It is replaced by the current TaskID. Any other occurrences of '@' will go
+ unchanged.
+ For example, to enable verbose gc logging to a file named for the taskid in
+ /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
+          -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
+ 
+ The configuration variable {@link #MAPRED_MAP_TASK_ENV} can be used to pass 
+ other environment variables to the map processes.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_REDUCE_TASK_JAVA_OPTS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Configuration key to set the java command line options for the reduce tasks.
+ 
+ Java opts for the task tracker child reduce processes.
+ The following symbol, if present, will be interpolated: @taskid@. 
+ It is replaced by the current TaskID. Any other occurrences of '@' will go
+ unchanged.
+ For example, to enable verbose gc logging to a file named for the taskid in
+ /tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
+          -Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc
+ 
+ The configuration variable {@link #MAPRED_REDUCE_TASK_ENV} can be used to 
+ pass process environment variables to the reduce processes.]]>
+      </doc>
+    </field>
+    <field name="DEFAULT_MAPRED_TASK_JAVA_OPTS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="MAPRED_TASK_ULIMIT" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="Configuration key to set the maximum virtual memory available to the child
+ map and reduce tasks (in kilo-bytes). This has been deprecated and will no
+ longer have any effect.">
+      <doc>
+      <![CDATA[@deprecated
+ Configuration key to set the maximum virtual memory available to the child
+ map and reduce tasks (in kilo-bytes). This has been deprecated and will no
+ longer have any effect.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_MAP_TASK_ULIMIT" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="Configuration key to set the maximum virtual memory available to the
+ map tasks (in kilo-bytes). This has been deprecated and will no
+ longer have any effect.">
+      <doc>
+      <![CDATA[@deprecated
+ Configuration key to set the maximum virtual memory available to the
+ map tasks (in kilo-bytes). This has been deprecated and will no
+ longer have any effect.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_REDUCE_TASK_ULIMIT" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="Configuration key to set the maximum virtual memory available to the
+ reduce tasks (in kilo-bytes). This has been deprecated and will no
+ longer have any effect.">
+      <doc>
+      <![CDATA[@deprecated
+ Configuration key to set the maximum virtual memory available to the
+ reduce tasks (in kilo-bytes). This has been deprecated and will no
+ longer have any effect.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_TASK_ENV" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="Use {@link #MAPRED_MAP_TASK_ENV} or 
+                 {@link #MAPRED_REDUCE_TASK_ENV}">
+      <doc>
+      <![CDATA[Configuration key to set the environment of the child map/reduce tasks.
+ 
+ The format of the value is <code>k1=v1,k2=v2</code>. Further it can 
+ reference existing environment variables via <code>$key</code> on
+ Linux or <code>%key%</code> on Windows.
+ 
+ Example:
+ <ul>
+   <li> A=foo - This will set the env variable A to foo. </li>
+ </ul>
+ 
+ @deprecated Use {@link #MAPRED_MAP_TASK_ENV} or 
+                 {@link #MAPRED_REDUCE_TASK_ENV}]]>
+      </doc>
+    </field>
+    <field name="MAPRED_MAP_TASK_ENV" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Configuration key to set the environment of the child map tasks.
+ 
+ The format of the value is <code>k1=v1,k2=v2</code>. Further it can
+ reference existing environment variables via <code>$key</code> on
+ Linux or <code>%key%</code> on Windows.
+ 
+ Example:
+ <ul>
+   <li> A=foo - This will set the env variable A to foo. </li>
+ </ul>
+
+ You can also add environment variables individually by appending
+ <code>.VARNAME</code> to this configuration key, where VARNAME is
+ the name of the environment variable.
+
+ Example:
+ <ul>
+   <li>mapreduce.map.env.VARNAME=value</li>
+ </ul>]]>
+      </doc>
+    </field>
+    <field name="MAPRED_REDUCE_TASK_ENV" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Configuration key to set the environment of the child reduce tasks.
+ 
+ The format of the value is <code>k1=v1,k2=v2</code>. Further it can 
+ reference existing environment variables via <code>$key</code> on
+ Linux or <code>%key%</code> on Windows.
+ 
+ Example:
+ <ul>
+   <li> A=foo - This will set the env variable A to foo. </li>
+ </ul>
+
+ You can also add environment variables individually by appending
+ <code>.VARNAME</code> to this configuration key, where VARNAME is
+ the name of the environment variable.
+
+ Example:
+ <ul>
+   <li>mapreduce.reduce.env.VARNAME=value</li>
+ </ul>]]>
+      </doc>
+    </field>
+    <field name="MAPRED_MAP_TASK_LOG_LEVEL" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Configuration key to set the logging level for the map task.
+
+ The allowed logging levels are:
+ OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.]]>
+      </doc>
+    </field>
+    <field name="MAPRED_REDUCE_TASK_LOG_LEVEL" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Configuration key to set the logging level for the reduce task.
+
+ The allowed logging levels are:
+ OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE and ALL.]]>
+      </doc>
+    </field>
+    <field name="DEFAULT_LOG_LEVEL" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Default logging level for map/reduce tasks.]]>
+      </doc>
+    </field>
+    <field name="WORKFLOW_ID" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ use {@link MRJobConfig#WORKFLOW_ID} instead]]>
+      </doc>
+    </field>
+    <field name="WORKFLOW_NAME" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ use {@link MRJobConfig#WORKFLOW_NAME} instead]]>
+      </doc>
+    </field>
+    <field name="WORKFLOW_NODE_NAME" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ use {@link MRJobConfig#WORKFLOW_NODE_NAME} instead]]>
+      </doc>
+    </field>
+    <field name="WORKFLOW_ADJACENCY_PREFIX_STRING" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ use {@link MRJobConfig#WORKFLOW_ADJACENCY_PREFIX_STRING} instead]]>
+      </doc>
+    </field>
+    <field name="WORKFLOW_ADJACENCY_PREFIX_PATTERN" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ use {@link MRJobConfig#WORKFLOW_ADJACENCY_PREFIX_PATTERN} instead]]>
+      </doc>
+    </field>
+    <field name="WORKFLOW_TAGS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ use {@link MRJobConfig#WORKFLOW_TAGS} instead]]>
+      </doc>
+    </field>
+    <field name="MAPREDUCE_RECOVER_JOB" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ not use it]]>
+      </doc>
+    </field>
+    <field name="DEFAULT_MAPREDUCE_RECOVER_JOB" type="boolean"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The variable is kept for M/R 1.x applications, M/R 2.x applications should
+ not use it]]>
+      </doc>
+    </field>
+    <doc>
+    <![CDATA[A map/reduce job configuration.
+ 
+ <p><code>JobConf</code> is the primary interface for a user to describe a 
+ map-reduce job to the Hadoop framework for execution. The framework tries to
+ faithfully execute the job as described by <code>JobConf</code>; however:
+ <ol>
+   <li>
+   Some configuration parameters might have been marked as 
+   <a href="{@docRoot}/org/apache/hadoop/conf/Configuration.html#FinalParams">
+   final</a> by administrators and hence cannot be altered.
+   </li>
+   <li>
+   While some job parameters are straightforward to set
+   (e.g. {@link #setNumReduceTasks(int)}), some parameters interact subtly
+   with the rest of the framework and/or job configuration and are
+   relatively harder for the user to control finely
+   (e.g. {@link #setNumMapTasks(int)}).
+   </li>
+ </ol>
+ 
+ <p><code>JobConf</code> typically specifies the {@link Mapper}, combiner 
+ (if any), {@link Partitioner}, {@link Reducer}, {@link InputFormat} and 
+ {@link OutputFormat} implementations to be used etc.
+
+ <p>Optionally, <code>JobConf</code> is used to specify other advanced facets
+ of the job, such as the <code>Comparator</code>s to be used, the files to be
+ put in the {@link DistributedCache}, whether or not intermediate and/or job
+ outputs are to be compressed (and how), and debuggability via user-provided
+ scripts
+ ({@link #setMapDebugScript(String)}/{@link #setReduceDebugScript(String)})
+ for post-processing task logs, the task's stdout, stderr, and syslog.</p>
+ 
+ <p>Here is an example on how to configure a job via <code>JobConf</code>:</p>
+ <p><blockquote><pre>
+     // Create a new JobConf
+     JobConf job = new JobConf(new Configuration(), MyJob.class);
+     
+     // Specify various job-specific parameters     
+     job.setJobName("myjob");
+     
+     FileInputFormat.setInputPaths(job, new Path("in"));
+     FileOutputFormat.setOutputPath(job, new Path("out"));
+     
+     job.setMapperClass(MyJob.MyMapper.class);
+     job.setCombinerClass(MyJob.MyReducer.class);
+     job.setReducerClass(MyJob.MyReducer.class);
+     
+     job.setInputFormat(SequenceFileInputFormat.class);
+     job.setOutputFormat(SequenceFileOutputFormat.class);
+ </pre></blockquote>
+ 
+ @see JobClient
+ @see ClusterStatus
+ @see Tool
+ @see DistributedCache]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.JobConf -->
+  <!-- start interface org.apache.hadoop.mapred.JobConfigurable -->
+  <interface name="JobConfigurable"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="configure"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Initializes a new instance from a {@link JobConf}.
+
+ @param job the configuration]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Something that may be configured with a {@link JobConf}.]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.JobConfigurable -->
+  <!-- start interface org.apache.hadoop.mapred.JobContext -->
+  <interface name="JobContext"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapreduce.JobContext"/>
+    <method name="getJobConf" return="org.apache.hadoop.mapred.JobConf"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the job Configuration
+ 
+ @return JobConf]]>
+      </doc>
+    </method>
+    <method name="getProgressible" return="org.apache.hadoop.util.Progressable"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the progress mechanism for reporting progress.
+ 
+ @return progress mechanism]]>
+      </doc>
+    </method>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.JobContext -->
+  <!-- start class org.apache.hadoop.mapred.JobID -->
+  <class name="JobID" extends="org.apache.hadoop.mapreduce.JobID"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="JobID" type="java.lang.String, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a JobID object 
+ @param jtIdentifier jobTracker identifier
+ @param id job number]]>
+      </doc>
+    </constructor>
+    <constructor name="JobID"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="downgrade" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="old" type="org.apache.hadoop.mapreduce.JobID"/>
+      <doc>
+      <![CDATA[Downgrade a new JobID to an old one.
+ @param old a new or old JobID
+ @return either old or a new JobID built to match old]]>
+      </doc>
+    </method>
+    <method name="read" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="forName" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="str" type="java.lang.String"/>
+      <exception name="IllegalArgumentException" type="java.lang.IllegalArgumentException"/>
+      <doc>
+      <![CDATA[Construct a JobID object from a given string.
+ @return constructed JobID object, or null if the given String is null
+ @throws IllegalArgumentException if the given string is malformed]]>
+      </doc>
+    </method>
+    <method name="getJobIDsPattern" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jtIdentifier" type="java.lang.String"/>
+      <param name="jobId" type="java.lang.Integer"/>
+      <doc>
+      <![CDATA[Returns a regex pattern which matches job IDs. Arguments can
+ be given null, in which case that part of the regex will be generic.
+ For example, to obtain a regex matching <i>any job</i>
+ run on the jobtracker started at <i>200707121733</i>, we would use:
+ <pre> 
+ JobID.getJobIDsPattern("200707121733", null);
+ </pre>
+ which will return:
+ <pre> "job_200707121733_[0-9]*" </pre>
+ @param jtIdentifier jobTracker identifier, or null
+ @param jobId job number, or null
+ @return a regex pattern matching JobIDs]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[JobID represents the immutable and unique identifier for
+ the job. A JobID consists of two parts. The first part is the
+ jobtracker identifier, which defines the jobID-to-jobtracker mapping.
+ For a cluster setup this string is the jobtracker start time; for a
+ local setting it is "local".
+ The second part of the JobID is the job number. <br>
+ An example JobID is
+ <code>job_200707121733_0003</code>, which represents the third job
+ running at the jobtracker started at <code>200707121733</code>.
+ <p>
+ Applications should never construct or parse JobID strings directly, but
+ rather use the appropriate constructors or the {@link #forName(String)} method.
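+ 
+ <p>For example (a minimal sketch), an application can parse and inspect an
+ existing ID string:</p>
+ <p><blockquote><pre>
+     JobID id = JobID.forName("job_200707121733_0003");
+     String tracker = id.getJtIdentifier();  // "200707121733"
+     int jobNumber = id.getId();             // 3
+ </pre></blockquote>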
+ 
+ @see TaskID
+ @see TaskAttemptID]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.JobID -->
+  <!-- start class org.apache.hadoop.mapred.JobPriority -->
+  <class name="JobPriority" extends="java.lang.Enum"
+    abstract="false"
+    static="false" final="true" visibility="public"
+    deprecated="not deprecated">
+    <method name="values" return="org.apache.hadoop.mapred.JobPriority[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="valueOf" return="org.apache.hadoop.mapred.JobPriority"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="name" type="java.lang.String"/>
+    </method>
+    <doc>
+    <![CDATA[Used to describe the priority of the running job.
+ DEFAULT : While submitting a job, if the user does not specify a priority,
+ YARN can pick a default priority as per its configuration; MapReduce
+ indicates such cases with this enum value.
+ UNDEFINED_PRIORITY : YARN supports priority as an integer, so it can accept
+ integers other than the five defined enum values. This enum value is used to
+ generalize such cases.
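+ 
+ <p>For example (a sketch), a job can request a higher priority via its
+ configuration:</p>
+ <p><blockquote><pre>
+     JobConf job = new JobConf();
+     job.setJobPriority(JobPriority.HIGH);
+ </pre></blockquote>]]>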
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.JobPriority -->
+  <!-- start class org.apache.hadoop.mapred.JobQueueInfo -->
+  <class name="JobQueueInfo" extends="org.apache.hadoop.mapreduce.QueueInfo"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="JobQueueInfo"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Default constructor for Job Queue Info.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobQueueInfo" type="java.lang.String, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Construct a new JobQueueInfo object using the queue name and the
+ scheduling information passed.
+ 
+ @param queueName Name of the job queue
+ @param schedulingInfo Scheduling Information associated with the job
+ queue]]>
+      </doc>
+    </constructor>
+    <method name="getQueueState" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Use getState() instead]]>
+      </doc>
+    </method>
+    <method name="getChildren" return="java.util.List"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[Class that contains the information regarding the Job Queues which are 
+ maintained by the Hadoop Map/Reduce framework.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.JobQueueInfo -->
+  <!-- start class org.apache.hadoop.mapred.JobStatus -->
+  <class name="JobStatus" extends="org.apache.hadoop.mapreduce.JobStatus"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="JobStatus"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param runState The current state of the job]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, int, org.apache.hadoop.mapred.JobPriority"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param runState The current state of the job
+ @param jp Priority of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, int, java.lang.String, java.lang.String, java.lang.String, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on cleanup
+ @param runState The current state of the job
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param jobFile job configuration file. 
+ @param trackingUrl link to the web-ui for details of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, int, java.lang.String, java.lang.String, java.lang.String, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param runState The current state of the job
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param jobFile job configuration file. 
+ @param trackingUrl link to the web-ui for details of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param jobFile job configuration file. 
+ @param trackingUrl link to the web-ui for details of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param jobFile job configuration file.
+ @param trackingUrl link to the web-ui for details of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String, boolean"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param jobFile job configuration file.
+ @param trackingUrl link to the web-ui for details of the job.
+ @param isUber Whether job running in uber mode]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String, boolean, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param jobFile job configuration file.
+ @param trackingUrl link to the web-ui for details of the job.
+ @param isUber Whether job running in uber mode
+ @param historyFile history file]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param queue job queue name.
+ @param jobFile job configuration file.
+ @param trackingUrl link to the web-ui for details of the job.]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String, java.lang.String, boolean"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param queue job queue name.
+ @param jobFile job configuration file. 
+ @param trackingUrl link to the web-ui for details of the job.
+ @param isUber Whether job running in uber mode]]>
+      </doc>
+    </constructor>
+    <constructor name="JobStatus" type="org.apache.hadoop.mapred.JobID, float, float, float, float, int, org.apache.hadoop.mapred.JobPriority, java.lang.String, java.lang.String, java.lang.String, java.lang.String, java.lang.String, boolean, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create a job status object for a given jobid.
+ @param jobid The jobid of the job
+ @param setupProgress The progress made on the setup
+ @param mapProgress The progress made on the maps
+ @param reduceProgress The progress made on the reduces
+ @param cleanupProgress The progress made on the cleanup
+ @param runState The current state of the job
+ @param jp Priority of the job.
+ @param user userid of the person who submitted the job.
+ @param jobName user-specified job name.
+ @param queue job queue name.
+ @param jobFile job configuration file.
+ @param trackingUrl link to the web-ui for details of the job.
+ @param isUber Whether job running in uber mode
+ @param historyFile history file]]>
+      </doc>
+    </constructor>
+    <method name="getJobRunState" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="state" type="int"/>
+      <doc>
+      <![CDATA[Helper method to get human-readable state of the job.
+ @param state job state
+ @return human-readable state of the job]]>
+      </doc>
+    </method>
+    <method name="downgrade" return="org.apache.hadoop.mapred.JobStatus"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="stat" type="org.apache.hadoop.mapreduce.JobStatus"/>
+    </method>
+    <method name="getJobId" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use getJobID instead">
+      <doc>
+      <![CDATA[@deprecated use getJobID instead]]>
+      </doc>
+    </method>
+    <method name="getJobID" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return The jobid of the Job]]>
+      </doc>
+    </method>
+    <method name="getJobPriority" return="org.apache.hadoop.mapred.JobPriority"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Return the priority of the job
+ @return job priority]]>
+      </doc>
+    </method>
+    <method name="setMapProgress"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="p" type="float"/>
+      <doc>
+      <![CDATA[Sets the map progress of this job.
+ @param p the map progress value to set]]>
+      </doc>
+    </method>
+    <method name="setCleanupProgress"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="p" type="float"/>
+      <doc>
+      <![CDATA[Sets the cleanup progress of this job.
+ @param p the cleanup progress value to set]]>
+      </doc>
+    </method>
+    <method name="setSetupProgress"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="p" type="float"/>
+      <doc>
+      <![CDATA[Sets the setup progress of this job.
+ @param p the setup progress value to set]]>
+      </doc>
+    </method>
+    <method name="setReduceProgress"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="p" type="float"/>
+      <doc>
+      <![CDATA[Sets the reduce progress of this job.
+ @param p the reduce progress value to set]]>
+      </doc>
+    </method>
+    <method name="setFinishTime"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="finishTime" type="long"/>
+      <doc>
+      <![CDATA[Set the finish time of the job
+ @param finishTime The finishTime of the job]]>
+      </doc>
+    </method>
+    <method name="setHistoryFile"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="historyFile" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the job history file URL for a completed job.]]>
+      </doc>
+    </method>
+    <method name="setTrackingUrl"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="trackingUrl" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the link to the web-ui for details of the job.]]>
+      </doc>
+    </method>
+    <method name="setRetired"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Set the job retire flag to true.]]>
+      </doc>
+    </method>
+    <method name="getRunState" return="int"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return running state of the job]]>
+      </doc>
+    </method>
+    <method name="setStartTime"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="startTime" type="long"/>
+      <doc>
+      <![CDATA[Set the start time of the job
+ @param startTime The startTime of the job]]>
+      </doc>
+    </method>
+    <method name="setUsername"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="userName" type="java.lang.String"/>
+      <doc>
+      <![CDATA[@param userName The username of the job]]>
+      </doc>
+    </method>
+    <method name="setJobACLs"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="acls" type="java.util.Map"/>
+    </method>
+    <method name="setFailureInfo"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="failureInfo" type="java.lang.String"/>
+    </method>
+    <method name="setJobPriority"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jp" type="org.apache.hadoop.mapred.JobPriority"/>
+      <doc>
+      <![CDATA[Set the priority of the job, defaulting to NORMAL.
+ @param jp new job priority]]>
+      </doc>
+    </method>
+    <method name="mapProgress" return="float"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return Percentage of progress in maps]]>
+      </doc>
+    </method>
+    <method name="cleanupProgress" return="float"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return Percentage of progress in cleanup]]>
+      </doc>
+    </method>
+    <method name="setupProgress" return="float"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return Percentage of progress in setup]]>
+      </doc>
+    </method>
+    <method name="reduceProgress" return="float"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return Percentage of progress in reduce]]>
+      </doc>
+    </method>
+    <field name="RUNNING" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="SUCCEEDED" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="FAILED" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="PREP" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="KILLED" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[Describes the current status of a job.  This is
+ not intended to be a comprehensive piece of data.
+ For that, look at JobProfile.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.JobStatus -->
+  <!-- start class org.apache.hadoop.mapred.KeyValueLineRecordReader -->
+  <class name="KeyValueLineRecordReader" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.RecordReader"/>
+    <constructor name="KeyValueLineRecordReader" type="org.apache.hadoop.conf.Configuration, org.apache.hadoop.mapred.FileSplit"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </constructor>
+    <method name="getKeyClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="createKey" return="org.apache.hadoop.io.Text"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="createValue" return="org.apache.hadoop.io.Text"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="findSeparator" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="utf" type="byte[]"/>
+      <param name="start" type="int"/>
+      <param name="length" type="int"/>
+      <param name="sep" type="byte"/>
+    </method>
+    <method name="next" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="org.apache.hadoop.io.Text"/>
+      <param name="value" type="org.apache.hadoop.io.Text"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Read a key/value pair from a line.]]>
+      </doc>
+    </method>
+    <method name="getProgress" return="float"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getPos" return="long"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[This class treats a line in the input as a key/value pair separated by a
+ separator character. The separator can be specified in the configuration
+ under the attribute name mapreduce.input.keyvaluelinerecordreader.key.value.separator.
+ The default separator is the tab character ('\t').
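+ 
+ <p>For example, to split lines on a comma instead (a configuration sketch):</p>
+ <p><blockquote><pre>
+     Configuration conf = new Configuration();
+     // Override the default tab separator
+     conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");
+ </pre></blockquote>]]>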
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.KeyValueLineRecordReader -->
+  <!-- start class org.apache.hadoop.mapred.KeyValueTextInputFormat -->
+  <class name="KeyValueTextInputFormat" extends="org.apache.hadoop.mapred.FileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <constructor name="KeyValueTextInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="configure"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+    </method>
+    <method name="isSplitable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="fs" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="genericSplit" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An {@link InputFormat} for plain text files. Files are broken into lines.
+ Either linefeed or carriage-return is used to signal the end of a line. Each line
+ is divided into key and value parts by a separator byte. If no such byte
+ exists, the key will be the entire line and the value will be empty.
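+ 
+ <p>Typical usage (a sketch):</p>
+ <p><blockquote><pre>
+     JobConf job = new JobConf();
+     job.setInputFormat(KeyValueTextInputFormat.class);
+     FileInputFormat.setInputPaths(job, new Path("in"));
+ </pre></blockquote>]]>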
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.KeyValueTextInputFormat -->
+  <!-- start class org.apache.hadoop.mapred.MapFileOutputFormat -->
+  <class name="MapFileOutputFormat" extends="org.apache.hadoop.mapred.FileOutputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="MapFileOutputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getRecordWriter" return="org.apache.hadoop.mapred.RecordWriter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <param name="progress" type="org.apache.hadoop.util.Progressable"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getReaders" return="org.apache.hadoop.io.MapFile.Reader[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="dir" type="org.apache.hadoop.fs.Path"/>
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Open the output generated by this format.]]>
+      </doc>
+    </method>
+    <method name="getEntry" return="org.apache.hadoop.io.Writable"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="readers" type="org.apache.hadoop.io.MapFile.Reader[]"/>
+      <param name="partitioner" type="org.apache.hadoop.mapred.Partitioner"/>
+      <param name="key" type="K"/>
+      <param name="value" type="V"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get an entry from the output generated by this class.
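+ 
+ <p>For example (a sketch, assuming the job's <code>Partitioner</code>,
+ output directory, key, and configuration are at hand):</p>
+ <p><blockquote><pre>
+     MapFile.Reader[] readers =
+       MapFileOutputFormat.getReaders(null, outputDir, conf);
+     Text value = new Text();
+     // The partitioner routes the key to the reader that holds it
+     MapFileOutputFormat.getEntry(readers, partitioner, key, value);
+ </pre></blockquote>]]>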
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[An {@link OutputFormat} that writes {@link MapFile}s.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.MapFileOutputFormat -->
+  <!-- start interface org.apache.hadoop.mapred.Mapper -->
+  <interface name="Mapper"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <implements name="org.apache.hadoop.io.Closeable"/>
+    <method name="map"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K1"/>
+      <param name="value" type="V1"/>
+      <param name="output" type="org.apache.hadoop.mapred.OutputCollector"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Maps a single input key/value pair into an intermediate key/value pair.
+ 
+ <p>Output pairs need not be of the same types as input pairs.  A given 
+ input pair may map to zero or many output pairs.  Output pairs are 
+ collected with calls to 
+ {@link OutputCollector#collect(Object,Object)}.</p>
+
+ <p>Applications can use the {@link Reporter} provided to report progress
+ or just indicate that they are alive. In scenarios where the application
+ takes a significant amount of time to process individual key/value
+ pairs, this is crucial since the framework might assume that the task has
+ timed out and kill that task. Another way of avoiding this is to set
+ <a href="{@docRoot}/../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml#mapreduce.task.timeout">
+ mapreduce.task.timeout</a> to a high-enough value (or even zero for no
+ time-outs).</p>
+ 
+ @param key the input key.
+ @param value the input value.
+ @param output collects mapped keys and values.
+ @param reporter facility to report progress.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Maps input key/value pairs to a set of intermediate key/value pairs.  
+ 
+ <p>Maps are the individual tasks which transform input records into
+ intermediate records. The transformed intermediate records need not be of
+ the same type as the input records. A given input pair may map to zero or
+ many output pairs.</p>
+ 
+ <p>The Hadoop Map-Reduce framework spawns one map task for each 
+ {@link InputSplit} generated by the {@link InputFormat} for the job.
+ <code>Mapper</code> implementations can access the {@link JobConf} for the 
+ job via the {@link JobConfigurable#configure(JobConf)} and initialize
+ themselves. Similarly they can use the {@link Closeable#close()} method for
+ de-initialization.</p>
+ 
+ <p>The framework then calls 
+ {@link #map(Object, Object, OutputCollector, Reporter)} 
+ for each key/value pair in the <code>InputSplit</code> for that task.</p>
+ 
+ <p>All intermediate values associated with a given output key are 
+ subsequently grouped by the framework, and passed to a {@link Reducer} to  
+ determine the final output. Users can control the grouping by specifying
+ a <code>Comparator</code> via 
+ {@link JobConf#setOutputKeyComparatorClass(Class)}.</p>
+
+ <p>The grouped <code>Mapper</code> outputs are partitioned per
+ <code>Reducer</code>. Users can control which keys (and hence records) go to
+ which <code>Reducer</code> by implementing a custom {@link Partitioner}.</p>
+ 
+ <p>Users can optionally specify a <code>combiner</code>, via
+ {@link JobConf#setCombinerClass(Class)}, to perform local aggregation of the
+ intermediate outputs, which helps to cut down the amount of data transferred
+ from the <code>Mapper</code> to the <code>Reducer</code>.</p>
+ 
+ <p>The intermediate, grouped outputs are always stored in 
+ {@link SequenceFile}s. Applications can specify if and how the intermediate
+ outputs are to be compressed and which {@link CompressionCodec}s are to be
+ used via the <code>JobConf</code>.</p>
+  
+ <p>If the job has 
+ <a href="{@docRoot}/org/apache/hadoop/mapred/JobConf.html#ReducerNone">zero
+ reduces</a> then the output of the <code>Mapper</code> is directly written
+ to the {@link FileSystem} without grouping by keys.</p>
+ 
+ <p>Example:</p>
+ <p><blockquote><pre>
+     public class MyMapper&lt;K extends WritableComparable, V extends Writable&gt; 
+     extends MapReduceBase implements Mapper&lt;K, V, K, V&gt; {
+     
+       static enum MyCounters { NUM_RECORDS }
+       
+       private String mapTaskId;
+       private String inputFile;
+       private int noRecords = 0;
+       
+       public void configure(JobConf job) {
+         mapTaskId = job.get(JobContext.TASK_ATTEMPT_ID);
+         inputFile = job.get(JobContext.MAP_INPUT_FILE);
+       }
+       
+       public void map(K key, V val,
+                       OutputCollector&lt;K, V&gt; output, Reporter reporter)
+       throws IOException {
+         // Process the &lt;key, value&gt; pair (assume this takes a while)
+         // ...
+         // ...
+         
+         // Let the framework know that we are alive, and kicking!
+         // reporter.progress();
+         
+         // Process some more
+         // ...
+         // ...
+         
+         // Increment the no. of &lt;key, value&gt; pairs processed
+         ++noRecords;
+
+         // Increment counters
+         reporter.incrCounter(NUM_RECORDS, 1);
+        
+         // Every 100 records update application-level status
+         if ((noRecords%100) == 0) {
+           reporter.setStatus(mapTaskId + " processed " + noRecords + 
+                              " from input-file: " + inputFile); 
+         }
+         
+         // Output the result
+         output.collect(key, val);
+       }
+     }
+ </pre></blockquote>
+
+ <p>Applications may write a custom {@link MapRunnable} to exert greater
+ control on map processing e.g. multi-threaded <code>Mapper</code>s etc.</p>
+ 
+ @see JobConf
+ @see InputFormat
+ @see Partitioner  
+ @see Reducer
+ @see MapReduceBase
+ @see MapRunnable
+ @see SequenceFile]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.Mapper -->
+  <!-- start class org.apache.hadoop.mapred.MapReduceBase -->
+  <class name="MapReduceBase" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.io.Closeable"/>
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <constructor name="MapReduceBase"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="close"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Default implementation that does nothing.]]>
+      </doc>
+    </method>
+    <method name="configure"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Default implementation that does nothing.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Base class for {@link Mapper} and {@link Reducer} implementations.
+ 
+ <p>Provides default no-op implementations for a few methods; most non-trivial
+ applications need to override some of them.</p>]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.MapReduceBase -->
+  <!-- start interface org.apache.hadoop.mapred.MapRunnable -->
+  <interface name="MapRunnable"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <method name="run"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="input" type="org.apache.hadoop.mapred.RecordReader"/>
+      <param name="output" type="org.apache.hadoop.mapred.OutputCollector"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Start mapping input <tt>&lt;key, value&gt;</tt> pairs.
+  
+ <p>Mapping of input records to output records is complete when this method 
+ returns.</p>
+ 
+ @param input the {@link RecordReader} to read the input records.
+ @param output the {@link OutputCollector} to collect the output records.
+ @param reporter {@link Reporter} to report progress, status-updates etc.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Expert: Generic interface for {@link Mapper}s.
+ 
+ <p>Custom implementations of <code>MapRunnable</code> can exert greater 
+ control on map processing e.g. multi-threaded, asynchronous mappers etc.</p>
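+ 
+ <p>A minimal single-threaded implementation (a sketch of the default
+ behaviour, assuming a <code>mapper</code> field initialized in
+ {@link JobConfigurable#configure(JobConf)}) simply drains the
+ {@link RecordReader}:</p>
+ <p><blockquote><pre>
+     public void run(RecordReader&lt;K1, V1&gt; input,
+                     OutputCollector&lt;K2, V2&gt; output,
+                     Reporter reporter) throws IOException {
+       K1 key = input.createKey();
+       V1 value = input.createValue();
+       while (input.next(key, value)) {
+         // Hand each record to the wrapped Mapper
+         mapper.map(key, value, output, reporter);
+       }
+     }
+ </pre></blockquote>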
+ 
+ @see Mapper]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.MapRunnable -->
+  <!-- start class org.apache.hadoop.mapred.MapRunner -->
+  <class name="MapRunner" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.MapRunnable"/>
+    <constructor name="MapRunner"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="configure"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+    </method>
+    <method name="run"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="input" type="org.apache.hadoop.mapred.RecordReader"/>
+      <param name="output" type="org.apache.hadoop.mapred.OutputCollector"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getMapper" return="org.apache.hadoop.mapred.Mapper"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[Default {@link MapRunnable} implementation.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.MapRunner -->
+  <!-- start class org.apache.hadoop.mapred.MultiFileInputFormat -->
+  <class name="MultiFileInputFormat" extends="org.apache.hadoop.mapred.FileInputFormat"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="MultiFileInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getSplits" return="org.apache.hadoop.mapred.InputSplit[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="numSplits" type="int"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An abstract {@link InputFormat} that returns {@link MultiFileSplit}s
+ from the {@link #getSplits(JobConf, int)} method. Splits are constructed from
+ the files under the input paths. Each split returned contains <i>nearly</i>
+ equal content length. <br>
+ Subclasses implement {@link #getRecordReader(InputSplit, JobConf, Reporter)}
+ to construct <code>RecordReader</code>s for <code>MultiFileSplit</code>s.
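+ 
+ <p>A subclass typically only supplies the reader (a sketch;
+ <code>MyRecordReader</code> is a hypothetical reader over a
+ <code>MultiFileSplit</code>):</p>
+ <p><blockquote><pre>
+     public class MyInputFormat extends MultiFileInputFormat&lt;Text, Text&gt; {
+       public RecordReader&lt;Text, Text&gt; getRecordReader(InputSplit split,
+           JobConf job, Reporter reporter) throws IOException {
+         return new MyRecordReader((MultiFileSplit) split, job, reporter);
+       }
+     }
+ </pre></blockquote>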
+ @see MultiFileSplit]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.MultiFileInputFormat -->
+  <!-- start class org.apache.hadoop.mapred.MultiFileSplit -->
+  <class name="MultiFileSplit" extends="org.apache.hadoop.mapred.lib.CombineFileSplit"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="MultiFileSplit" type="org.apache.hadoop.mapred.JobConf, org.apache.hadoop.fs.Path[], long[]"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getLocations" return="java.lang.String[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="toString" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <doc>
+    <![CDATA[A sub-collection of input files. Unlike {@link FileSplit}, the MultiFileSplit
+ class does not represent a split of a file, but a split of input files
+ into smaller sets. The atomic unit of split is a file. <br>
+ MultiFileSplit can be used to implement {@link RecordReader}s that read
+ one record per file.
+ @see FileSplit
+ @see MultiFileInputFormat]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.MultiFileSplit -->
+  <!-- start interface org.apache.hadoop.mapred.OutputCollector -->
+  <interface name="OutputCollector"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="collect"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K"/>
+      <param name="value" type="V"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Adds a key/value pair to the output.
+
+ @param key the key to collect.
+ @param value the value to collect.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Collects the <code>&lt;key, value&gt;</code> pairs output by {@link Mapper}s
+ and {@link Reducer}s.
+  
+ <p><code>OutputCollector</code> is the generalization of the facility 
+ provided by the Map-Reduce framework to collect data output by either the 
+ <code>Mapper</code> or the <code>Reducer</code> i.e. intermediate outputs 
+ or the output of the job.</p>]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.OutputCollector -->
+  <!-- start class org.apache.hadoop.mapred.OutputCommitter -->
+  <class name="OutputCommitter" extends="org.apache.hadoop.mapreduce.OutputCommitter"
+    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="OutputCommitter"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setupJob"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[For the framework to set up the job output during initialization.  This is
+ called from the application master process for the entire job. This will be
+ called multiple times, once per job attempt.
+ 
+ @param jobContext Context of the job whose output is being written.
+ @throws IOException if temporary output could not be created]]>
+      </doc>
+    </method>
+    <method name="cleanupJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #commitJob(JobContext)} or 
+                 {@link #abortJob(JobContext, int)} instead.">
+      <param name="jobContext" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[For cleaning up the job's output after job completion.  This is called
+ from the application master process for the entire job. This may be called
+ multiple times.
+ 
+ @param jobContext Context of the job whose output is being written.
+ @throws IOException
+ @deprecated Use {@link #commitJob(JobContext)} or 
+                 {@link #abortJob(JobContext, int)} instead.]]>
+      </doc>
+    </method>
+    <method name="commitJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[For committing the job's output after successful job completion. Note that this
+ is invoked for jobs with final runstate as SUCCESSFUL.  This is called
+ from the application master process for the entire job. This is guaranteed
+ to only be called once.  If it throws an exception the entire job will
+ fail.
+ 
+ @param jobContext Context of the job whose output is being written.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="abortJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapred.JobContext"/>
+      <param name="status" type="int"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[For aborting an unsuccessful job's output. Note that this is invoked for 
+ jobs with final runstate as {@link JobStatus#FAILED} or 
+ {@link JobStatus#KILLED}. This is called from the application
+ master process for the entire job. This may be called multiple times.
+ 
+ @param jobContext Context of the job whose output is being written.
+ @param status final runstate of the job
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="setupTask"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Sets up output for the task. This is called from each individual task's
+ process that will output to HDFS, and it is called just for that task. This
+ may be called multiple times for the same task, but for different task
+ attempts.
+ 
+ @param taskContext Context of the task whose output is being written.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="needsTaskCommit" return="boolean"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Check whether task needs a commit.  This is called from each individual
+ task's process that will output to HDFS, and it is called just for that
+ task.
+ 
+ @param taskContext Context of the task whose output is being written.
+ @return <code>true</code> if the task needs a commit, <code>false</code> otherwise
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="commitTask"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[To promote the task's temporary output to the final output location.
+ If {@link #needsTaskCommit(TaskAttemptContext)} returns true and this
+ task is the task that the AM determines finished first, this method
+ is called to commit an individual task's output.  This is to mark
+ that task's output as complete, as {@link #commitJob(JobContext)} will
+ also be called later on if the entire job finished successfully. This
+ is called from a task's process. This may be called multiple times for the
+ same task, but for different task attempts.  It should be very rare for this
+ to be called multiple times, and it requires odd networking failures to make
+ this happen. In the future the Hadoop framework may eliminate this race.
+ 
+ @param taskContext Context of the task whose output is being written.
+ @throws IOException if the commit fails]]>
+      </doc>
+    </method>
+    <method name="abortTask"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Discard the task output. This is called from a task's process to clean
+ up a single task's output that has not yet been committed. This may be
+ called multiple times for the same task, but for different task attempts.
+ 
+ @param taskContext Context of the task whose output is being discarded.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="isRecoverySupported" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #isRecoverySupported(JobContext)} instead.">
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old APIs and this is
+ a bridge between the two.
+ 
+ @deprecated Use {@link #isRecoverySupported(JobContext)} instead.]]>
+      </doc>
+    </method>
+    <method name="isRecoverySupported" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Is task output recovery supported for restarting jobs?
+ 
+ If task output recovery is supported, job restart can be done more
+ efficiently.
+
+ @param jobContext
+          Context of the job whose output is being written.
+ @return <code>true</code> if task output recovery is supported,
+         <code>false</code> otherwise
+ @throws IOException
+ @see #recoverTask(TaskAttemptContext)]]>
+      </doc>
+    </method>
+    <method name="isCommitJobRepeatable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapred.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns true if an in-progress job commit can be retried. If the MR AM is
+ re-run then it will check this value to determine if it can retry an
+ in-progress commit that was started by a previous version.
+ Note that in rare scenarios, the previous AM version might still be running
+ at that time, due to system anomalies. Hence if this method returns true
+ then the retry commit operation should be able to run concurrently with
+ the previous operation.
+
+ If repeatable job commit is supported, job restart can tolerate previous
+ AM failures during job commit.
+
+ By default, it is not supported. Subclasses (such as
+ FileOutputCommitter) should explicitly override it if they provide support.
+
+ @param jobContext
+          Context of the job whose output is being written.
+ @return <code>true</code> if repeatable job commit is supported,
+         <code>false</code> otherwise
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="isCommitJobRepeatable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapreduce.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="recoverTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapred.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Recover the task output. 
+ 
+ The retry-count for the job will be passed via the 
+ {@link MRConstants#APPLICATION_ATTEMPT_ID} key in  
+ {@link TaskAttemptContext#getConfiguration()} for the 
+ <code>OutputCommitter</code>. This is called from the application master
+ process, but it is called individually for each task.
+ 
+ If an exception is thrown the task will be attempted again. 
+ 
+ @param taskContext Context of the task whose output is being recovered
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="setupJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobContext" type="org.apache.hadoop.mapreduce.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="cleanupJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="Use {@link #commitJob(org.apache.hadoop.mapreduce.JobContext)}
+             or {@link #abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State)}
+             instead.">
+      <param name="context" type="org.apache.hadoop.mapreduce.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.
+ @deprecated Use {@link #commitJob(org.apache.hadoop.mapreduce.JobContext)}
+             or {@link #abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State)}
+             instead.]]>
+      </doc>
+    </method>
+    <method name="commitJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapreduce.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="abortJob"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapreduce.JobContext"/>
+      <param name="runState" type="org.apache.hadoop.mapreduce.JobStatus.State"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="setupTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapreduce.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="needsTaskCommit" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapreduce.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="commitTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapreduce.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="abortTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapreduce.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="recoverTask"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskContext" type="org.apache.hadoop.mapreduce.TaskAttemptContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this
+ is a bridge between the two.]]>
+      </doc>
+    </method>
+    <method name="isRecoverySupported" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="true" visibility="public"
+      deprecated="not deprecated">
+      <param name="context" type="org.apache.hadoop.mapreduce.JobContext"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[This method implements the new interface by calling the old method. Note
+ that the input types are different between the new and old apis and this is
+ a bridge between the two.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>OutputCommitter</code> describes the commit of task output for a 
+ Map-Reduce job.
+
+ <p>The Map-Reduce framework relies on the <code>OutputCommitter</code> of 
+ the job to:</p>
+ <ol>
+   <li>
+   Setup the job during initialization. For example, create the temporary 
+   output directory for the job during the initialization of the job.
+   </li>
+   <li>
+   Cleanup the job after the job completion. For example, remove the
+   temporary output directory after the job completion. 
+   </li>
+   <li>
+   Setup the task temporary output.
+   </li> 
+   <li>
+   Check whether a task needs a commit. This is to avoid the commit
+   procedure if a task does not need commit.
+   </li>
+   <li>
+   Commit of the task output.
+   </li>  
+   <li>
+   Discard the task commit.
+   </li>
+ </ol>
+ The methods in this class can be called from several different processes and
+ from several different contexts.  It is important to know which process and
+ which context each is called from.  Each method should be marked accordingly
+ in its documentation.  It is also important to note that not all methods are
+ guaranteed to be called once and only once.  If a method is not guaranteed to
+ have this property the output committer needs to handle this appropriately. 
+ Also note that only in rare situations may they be called
+ multiple times for the same task.
+ 
+ @see FileOutputCommitter 
+ @see JobContext
+ @see TaskAttemptContext]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.OutputCommitter -->
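+  <!-- Illustrative sketch: a minimal OutputCommitter subclass. It assumes tasks
+       write directly to their final location, so there is no task-level output
+       to promote; the class name NoOpOutputCommitter is hypothetical.
+
+       import java.io.IOException;
+       import org.apache.hadoop.mapred.JobContext;
+       import org.apache.hadoop.mapred.OutputCommitter;
+       import org.apache.hadoop.mapred.TaskAttemptContext;
+
+       public class NoOpOutputCommitter extends OutputCommitter {
+         public void setupJob(JobContext jobContext) throws IOException { }
+         public void setupTask(TaskAttemptContext taskContext) throws IOException { }
+         public boolean needsTaskCommit(TaskAttemptContext taskContext) throws IOException {
+           return false;  // nothing to promote, so commitTask is never invoked
+         }
+         public void commitTask(TaskAttemptContext taskContext) throws IOException { }
+         public void abortTask(TaskAttemptContext taskContext) throws IOException { }
+       }
+  -->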
+  <!-- start interface org.apache.hadoop.mapred.OutputFormat -->
+  <interface name="OutputFormat"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getRecordWriter" return="org.apache.hadoop.mapred.RecordWriter"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <param name="progress" type="org.apache.hadoop.util.Progressable"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the {@link RecordWriter} for the given job.
+
+ @param ignored
+ @param job configuration for the job whose output is being written.
+ @param name the unique name for this part of the output.
+ @param progress mechanism for reporting progress while writing to file.
+ @return a {@link RecordWriter} to write the output for the job.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="checkOutputSpecs"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Check for validity of the output-specification for the job.
+  
+ <p>This is to validate the output specification for the job when the
+ job is submitted.  Typically this checks that it does not already exist,
+ throwing an exception when it already exists, so that output is not
+ overwritten.</p>
+
+ @param ignored
+ @param job job configuration.
+ @throws IOException when output should not be attempted]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>OutputFormat</code> describes the output-specification for a 
+ Map-Reduce job.
+
+ <p>The Map-Reduce framework relies on the <code>OutputFormat</code> of the
+ job to:</p>
+ <ol>
+   <li>
+   Validate the output-specification of the job. For example, check that the
+   output directory doesn't already exist.
+   </li>
+   <li>
+   Provide the {@link RecordWriter} implementation to be used to write out
+   the output files of the job. Output files are stored in a 
+   {@link FileSystem}.
+   </li>
+ </ol>
+ 
+ @see RecordWriter
+ @see JobConf]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.OutputFormat -->
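+  <!-- Illustrative sketch: an OutputFormat that writes tab-separated lines,
+       with its RecordWriter as an anonymous class. The output path "out" and
+       the Text/IntWritable types are assumptions for the example.
+
+       import java.io.IOException;
+       import org.apache.hadoop.fs.FSDataOutputStream;
+       import org.apache.hadoop.fs.FileSystem;
+       import org.apache.hadoop.fs.Path;
+       import org.apache.hadoop.io.IntWritable;
+       import org.apache.hadoop.io.Text;
+       import org.apache.hadoop.mapred.JobConf;
+       import org.apache.hadoop.mapred.OutputFormat;
+       import org.apache.hadoop.mapred.RecordWriter;
+       import org.apache.hadoop.mapred.Reporter;
+       import org.apache.hadoop.util.Progressable;
+
+       public class TabSeparatedOutputFormat implements OutputFormat<Text, IntWritable> {
+         public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
+           // a real format would fail here if its output location already existed
+         }
+         public RecordWriter<Text, IntWritable> getRecordWriter(FileSystem ignored,
+             JobConf job, String name, Progressable progress) throws IOException {
+           final FSDataOutputStream out =
+               FileSystem.get(job).create(new Path("out", name));
+           return new RecordWriter<Text, IntWritable>() {
+             public void write(Text key, IntWritable value) throws IOException {
+               out.writeBytes(key + "\t" + value + "\n");  // one record per line
+             }
+             public void close(Reporter reporter) throws IOException {
+               out.close();
+             }
+           };
+         }
+       }
+  -->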
+  <!-- start class org.apache.hadoop.mapred.OutputLogFilter -->
+  <class name="OutputLogFilter" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.fs.PathFilter"/>
+    <constructor name="OutputLogFilter"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="accept" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="path" type="org.apache.hadoop.fs.Path"/>
+    </method>
+    <doc>
+    <![CDATA[This class filters log files from the given directory.
+ It doesn't accept paths containing _logs.
+ This can be used to list paths of output directory as follows:
+   Path[] fileList = FileUtil.stat2Paths(fs.listStatus(outDir,
+                                   new OutputLogFilter()));]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.OutputLogFilter -->
+  <!-- start interface org.apache.hadoop.mapred.Partitioner -->
+  <interface name="Partitioner"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <method name="getPartition" return="int"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K2"/>
+      <param name="value" type="V2"/>
+      <param name="numPartitions" type="int"/>
+      <doc>
+      <![CDATA[Get the partition number for a given key (hence record) given the total 
+ number of partitions i.e. number of reduce-tasks for the job.
+   
+ <p>Typically a hash function on all or a subset of the key.</p>
+
+ @param key the key to be partitioned.
+ @param value the entry value.
+ @param numPartitions the total number of partitions.
+ @return the partition number for the <code>key</code>.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Partitions the key space.
+ 
+ <p><code>Partitioner</code> controls the partitioning of the keys of the 
+ intermediate map-outputs. The key (or a subset of the key) is used to derive
+ the partition, typically by a hash function. The total number of partitions
+ is the same as the number of reduce tasks for the job. Hence this controls
+ which of the <code>m</code> reduce tasks the intermediate key (and hence the 
+ record) is sent for reduction.</p>
+
+ <p>Note: A <code>Partitioner</code> is created only when there are multiple
+ reducers.</p>
+ 
+ @see Reducer]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.Partitioner -->
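+  <!-- Illustrative sketch: a Partitioner that routes records by a hash of the
+       key, in the spirit of Hadoop's HashPartitioner. The Text/IntWritable key
+       and value types are assumptions for the example.
+
+       import org.apache.hadoop.io.IntWritable;
+       import org.apache.hadoop.io.Text;
+       import org.apache.hadoop.mapred.JobConf;
+       import org.apache.hadoop.mapred.Partitioner;
+
+       public class HashKeyPartitioner implements Partitioner<Text, IntWritable> {
+         public void configure(JobConf job) { }  // no configuration needed
+         public int getPartition(Text key, IntWritable value, int numPartitions) {
+           // mask the sign bit so the modulo result is always non-negative
+           return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
+         }
+       }
+  -->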
+  <!-- start interface org.apache.hadoop.mapred.RecordReader -->
+  <interface name="RecordReader"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="java.io.Closeable"/>
+    <method name="next" return="boolean"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K"/>
+      <param name="value" type="V"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Reads the next key/value pair from the input for processing.
+
+ @param key the key to read data into
+ @param value the value to read data into
+ @return true iff a key/value was read, false if at EOF]]>
+      </doc>
+    </method>
+    <method name="createKey" return="K"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create an object of the appropriate type to be used as a key.
+ 
+ @return a new key object.]]>
+      </doc>
+    </method>
+    <method name="createValue" return="V"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Create an object of the appropriate type to be used as a value.
+ 
+ @return a new value object.]]>
+      </doc>
+    </method>
+    <method name="getPos" return="long"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns the current position in the input.
+ 
+ @return the current position in the input.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="close"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Close this {@link RecordReader} to future operations.
+ 
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getProgress" return="float"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[How much of the input has the {@link RecordReader} consumed, i.e.
+ how much of the input has been processed?
+ 
+ @return progress from <code>0.0</code> to <code>1.0</code>.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>RecordReader</code> reads &lt;key, value&gt; pairs from an 
+ {@link InputSplit}.
+   
+ <p><code>RecordReader</code>, typically, converts the byte-oriented view of 
+ the input, provided by the <code>InputSplit</code>, and presents a 
+ record-oriented view for the {@link Mapper} and {@link Reducer} tasks for
+ processing. It thus assumes the responsibility of processing record 
+ boundaries and presenting the tasks with keys and values.</p>
+ 
+ @see InputSplit
+ @see InputFormat]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.RecordReader -->
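+  <!-- Illustrative sketch of the usual RecordReader consumption loop. The
+       format, split and job variables are assumed to be in scope (e.g. a
+       TextInputFormat, hence the LongWritable/Text types); Reporter.NULL
+       discards progress reports.
+
+       RecordReader<LongWritable, Text> reader =
+           format.getRecordReader(split, job, Reporter.NULL);
+       LongWritable key = reader.createKey();  // reused on every call to next()
+       Text value = reader.createValue();
+       try {
+         while (reader.next(key, value)) {
+           // key/value hold the current record; copy them if they must
+           // outlive this iteration, since the reader reuses the objects
+         }
+       } finally {
+         reader.close();
+       }
+  -->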
+  <!-- start interface org.apache.hadoop.mapred.RecordWriter -->
+  <interface name="RecordWriter"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="write"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K"/>
+      <param name="value" type="V"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Writes a key/value pair.
+
+ @param key the key to write.
+ @param value the value to write.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="close"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Close this <code>RecordWriter</code> to future operations.
+ 
+ @param reporter facility to report progress.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>RecordWriter</code> writes the output &lt;key, value&gt; pairs 
+ to an output file.
+ 
+ <p><code>RecordWriter</code> implementations write the job outputs to the
+ {@link FileSystem}.
+ 
+ @see OutputFormat]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.RecordWriter -->
+  <!-- start interface org.apache.hadoop.mapred.Reducer -->
+  <interface name="Reducer"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <implements name="org.apache.hadoop.io.Closeable"/>
+    <method name="reduce"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K2"/>
+      <param name="values" type="java.util.Iterator"/>
+      <param name="output" type="org.apache.hadoop.mapred.OutputCollector"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[<i>Reduces</i> values for a given key.  
+ 
+ <p>The framework calls this method for each 
+ <code>&lt;key, (list of values)&gt;</code> pair in the grouped inputs.
+ Output values must be of the same type as input values.  Input keys must 
+ not be altered. The framework will <b>reuse</b> the key and value objects
+ that are passed into the reduce, therefore applications should clone
+ the objects they want to keep a copy of. In many cases, all values are 
+ combined into zero or one value.
+ </p>
+   
+ <p>Output pairs are collected with calls to  
+ {@link OutputCollector#collect(Object,Object)}.</p>
+
+ <p>Applications can use the {@link Reporter} provided to report progress 
+ or just indicate that they are alive. In scenarios where the application 
+ takes a significant amount of time to process individual key/value 
+ pairs, this is crucial since the framework might assume that the task has 
+ timed-out and kill that task. The other way of avoiding this is to set 
+ <a href="{@docRoot}/../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml#mapreduce.task.timeout">
+ mapreduce.task.timeout</a> to a high-enough value (or even zero for no 
+ time-outs).</p>
+ 
+ @param key the key.
+ @param values the list of values to reduce.
+ @param output to collect keys and combined values.
+ @param reporter facility to report progress.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[Reduces a set of intermediate values which share a key to a smaller set of
+ values.  
+ 
+ <p>The number of <code>Reducer</code>s for the job is set by the user via 
+ {@link JobConf#setNumReduceTasks(int)}. <code>Reducer</code> implementations 
+ can access the {@link JobConf} for the job via the 
+ {@link JobConfigurable#configure(JobConf)} method and initialize themselves. 
+ Similarly they can use the {@link Closeable#close()} method for
+ de-initialization.</p>
+
+ <p><code>Reducer</code> has 3 primary phases:</p>
+ <ol>
+   <li>
+   
+   <b id="Shuffle">Shuffle</b>
+   
+   <p><code>Reducer</code> is input the grouped output of a {@link Mapper}.
+   In this phase the framework, for each <code>Reducer</code>, fetches the 
+   relevant partition of the output of all the <code>Mapper</code>s, via HTTP. 
+   </p>
+   </li>
+   
+   <li>
+   <b id="Sort">Sort</b>
+   
+   <p>The framework groups <code>Reducer</code> inputs by <code>key</code>s 
+   (since different <code>Mapper</code>s may have output the same key) in this
+   stage.</p>
+   
+   <p>The shuffle and sort phases occur simultaneously i.e. while outputs are
+   being fetched they are merged.</p>
+      
+   <b id="SecondarySort">SecondarySort</b>
+   
+   <p>If equivalence rules for keys while grouping the intermediates are 
+   different from those for grouping keys before reduction, then one may 
+   specify a <code>Comparator</code> via 
+   {@link JobConf#setOutputValueGroupingComparator(Class)}. Since 
+   {@link JobConf#setOutputKeyComparatorClass(Class)} can be used to 
+   control how intermediate keys are grouped, these can be used in conjunction 
+   to simulate <i>secondary sort on values</i>.</p>
+   
+   
+   For example, say that you want to find duplicate web pages and tag them 
+   all with the url of the "best" known example. You would set up the job 
+   like:
+   <ul>
+     <li>Map Input Key: url</li>
+     <li>Map Input Value: document</li>
+     <li>Map Output Key: document checksum, url pagerank</li>
+     <li>Map Output Value: url</li>
+     <li>Partitioner: by checksum</li>
+     <li>OutputKeyComparator: by checksum and then decreasing pagerank</li>
+     <li>OutputValueGroupingComparator: by checksum</li>
+   </ul>
+   </li>
+   
+   <li>   
+   <b id="Reduce">Reduce</b>
+   
+   <p>In this phase the 
+   {@link #reduce(Object, Iterator, OutputCollector, Reporter)}
+   method is called for each <code>&lt;key, (list of values)&gt;</code> pair in
+   the grouped inputs.</p>
+   <p>The output of the reduce task is typically written to the 
+   {@link FileSystem} via 
+   {@link OutputCollector#collect(Object, Object)}.</p>
+   </li>
+ </ol>
+ 
+ <p>The output of the <code>Reducer</code> is <b>not re-sorted</b>.</p>
+ 
+ <p>Example:</p>
+ <p><blockquote><pre>
+     public class MyReducer&lt;K extends WritableComparable, V extends Writable&gt; 
+     extends MapReduceBase implements Reducer&lt;K, V, K, V&gt; {
+     
+       static enum MyCounters { NUM_RECORDS }
+        
+       private String reduceTaskId;
+       private int noKeys = 0;
+       
+       public void configure(JobConf job) {
+         reduceTaskId = job.get(JobContext.TASK_ATTEMPT_ID);
+       }
+       
+       public void reduce(K key, Iterator&lt;V&gt; values,
+                          OutputCollector&lt;K, V&gt; output, 
+                          Reporter reporter)
+       throws IOException {
+       
+         // Process
+         int noValues = 0;
+         while (values.hasNext()) {
+           V value = values.next();
+           
+           // Increment the no. of values for this key
+           ++noValues;
+           
+           // Process the &lt;key, value&gt; pair (assume this takes a while)
+           // ...
+           // ...
+           
+           // Let the framework know that we are alive, and kicking!
+           if ((noValues%10) == 0) {
+             reporter.progress();
+           }
+         
+           // Process some more
+           // ...
+           // ...
+           
+           // Output the &lt;key, value&gt; 
+           output.collect(key, value);
+         }
+         
+         // Increment the no. of &lt;key, list of values&gt; pairs processed
+         ++noKeys;
+         
+         // Increment counters
+         reporter.incrCounter(NUM_RECORDS, 1);
+         
+         // Every 100 keys update application-level status
+         if ((noKeys%100) == 0) {
+           reporter.setStatus(reduceTaskId + " processed " + noKeys);
+         }
+       }
+     }
+ </pre></blockquote>
+ 
+ @see Mapper
+ @see Partitioner
+ @see Reporter
+ @see MapReduceBase]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.Reducer -->
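+  <!-- Illustrative sketch of the secondary-sort wiring described in the class
+       documentation above (the duplicate web page example). All class names
+       here are hypothetical stand-ins.
+
+       JobConf job = new JobConf(DedupJob.class);
+       job.setPartitionerClass(ChecksumPartitioner.class);  // partition by checksum
+       job.setOutputKeyComparatorClass(
+           ChecksumThenPagerankComparator.class);           // sort within a partition
+       job.setOutputValueGroupingComparator(
+           ChecksumGroupingComparator.class);               // group by checksum only
+  -->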
+  <!-- start interface org.apache.hadoop.mapred.Reporter -->
+  <interface name="Reporter"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.util.Progressable"/>
+    <method name="setStatus"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="status" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Set the status description for the task.
+ 
+ @param status brief description of the current status.]]>
+      </doc>
+    </method>
+    <method name="getCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="name" type="java.lang.Enum"/>
+      <doc>
+      <![CDATA[Get the {@link Counter} of the given group with the given name.
+ 
+ @param name counter name
+ @return the <code>Counter</code> of the given group/name.]]>
+      </doc>
+    </method>
+    <method name="getCounter" return="org.apache.hadoop.mapred.Counters.Counter"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="group" type="java.lang.String"/>
+      <param name="name" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Get the {@link Counter} of the given group with the given name.
+ 
+ @param group counter group
+ @param name counter name
+ @return the <code>Counter</code> of the given group/name.]]>
+      </doc>
+    </method>
+    <method name="incrCounter"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="java.lang.Enum"/>
+      <param name="amount" type="long"/>
+      <doc>
+      <![CDATA[Increments the counter identified by the key, which can be of
+ any {@link Enum} type, by the specified amount.
+ 
+ @param key key to identify the counter to be incremented. The key can be
+            any <code>Enum</code>. 
+ @param amount A non-negative amount by which the counter is to 
+               be incremented.]]>
+      </doc>
+    </method>
+    <method name="incrCounter"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="group" type="java.lang.String"/>
+      <param name="counter" type="java.lang.String"/>
+      <param name="amount" type="long"/>
+      <doc>
+      <![CDATA[Increments the counter identified by the group and counter name
+ by the specified amount.
+ 
+ @param group name to identify the group of the counter to be incremented.
+ @param counter name to identify the counter within the group.
+ @param amount A non-negative amount by which the counter is to 
+               be incremented.]]>
+      </doc>
+    </method>
+    <method name="getInputSplit" return="org.apache.hadoop.mapred.InputSplit"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="UnsupportedOperationException" type="java.lang.UnsupportedOperationException"/>
+      <doc>
+      <![CDATA[Get the {@link InputSplit} object for a map.
+ 
+ @return the <code>InputSplit</code> that the map is reading from.
+ @throws UnsupportedOperationException if called outside a mapper]]>
+      </doc>
+    </method>
+    <method name="getProgress" return="float"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the progress of the task. Progress is represented as a number between
+ 0 and 1 (inclusive).]]>
+      </doc>
+    </method>
+    <field name="NULL" type="org.apache.hadoop.mapred.Reporter"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[A constant of Reporter type that does nothing.]]>
+      </doc>
+    </field>
+    <doc>
+    <![CDATA[A facility for Map-Reduce applications to report progress and update 
+ counters, status information etc.
+ 
+ <p>{@link Mapper} and {@link Reducer} can use the <code>Reporter</code>
+ provided to report progress or just indicate that they are alive. In 
+ scenarios where the application takes a significant amount of time to
+ process individual key/value pairs, this is crucial since the framework 
+ might assume that the task has timed-out and kill that task.
+
+ <p>Applications can also update {@link Counters} via the provided 
+ <code>Reporter</code>.</p>
+ 
+ @see Progressable
+ @see Counters]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.Reporter -->
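+  <!-- Illustrative sketch of Reporter usage inside a map() or reduce() body;
+       the MyCounters enum is a hypothetical application counter.
+
+       enum MyCounters { RECORDS }
+
+       void processOne(Reporter reporter) {
+         reporter.progress();                          // heartbeat; resets the task timeout
+         reporter.incrCounter(MyCounters.RECORDS, 1);  // bump an application counter
+         reporter.setStatus("processed another record");
+       }
+  -->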
+  <!-- start interface org.apache.hadoop.mapred.RunningJob -->
+  <interface name="RunningJob"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <method name="getConfiguration" return="org.apache.hadoop.conf.Configuration"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the underlying job configuration
+
+ @return the configuration of the job.]]>
+      </doc>
+    </method>
+    <method name="getID" return="org.apache.hadoop.mapred.JobID"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the job identifier.
+ 
+ @return the job identifier.]]>
+      </doc>
+    </method>
+    <method name="getJobID" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="This method is deprecated and will be removed. Applications should 
+ rather use {@link #getID()}.">
+      <doc>
+      <![CDATA[@deprecated This method is deprecated and will be removed. Applications should 
+ rather use {@link #getID()}.]]>
+      </doc>
+    </method>
+    <method name="getJobName" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the name of the job.
+ 
+ @return the name of the job.]]>
+      </doc>
+    </method>
+    <method name="getJobFile" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the path of the submitted job configuration.
+ 
+ @return the path of the submitted job configuration.]]>
+      </doc>
+    </method>
+    <method name="getTrackingURL" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the URL where some job progress information will be displayed.
+ 
+ @return the URL where some job progress information will be displayed.]]>
+      </doc>
+    </method>
+    <method name="mapProgress" return="float"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the <i>progress</i> of the job's map-tasks, as a float between 0.0 
+ and 1.0.  When all map tasks have completed, the function returns 1.0.
+ 
+ @return the progress of the job's map-tasks.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="reduceProgress" return="float"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the <i>progress</i> of the job's reduce-tasks, as a float between 0.0 
+ and 1.0.  When all reduce tasks have completed, the function returns 1.0.
+ 
+ @return the progress of the job's reduce-tasks.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="cleanupProgress" return="float"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the <i>progress</i> of the job's cleanup-tasks, as a float between 0.0 
+ and 1.0.  When all cleanup tasks have completed, the function returns 1.0.
+ 
+ @return the progress of the job's cleanup-tasks.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="setupProgress" return="float"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the <i>progress</i> of the job's setup-tasks, as a float between 0.0 
+ and 1.0.  When all setup tasks have completed, the function returns 1.0.
+ 
+ @return the progress of the job's setup-tasks.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="isComplete" return="boolean"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Check if the job is finished or not. 
+ This is a non-blocking call.
+ 
+ @return <code>true</code> if the job is complete, else <code>false</code>.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="isSuccessful" return="boolean"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Check if the job completed successfully. 
+ 
+ @return <code>true</code> if the job succeeded, else <code>false</code>.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="waitForCompletion"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Blocks until the job is complete.
+ 
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getJobState" return="int"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns the current state of the Job; see {@link JobStatus}.
+ 
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getJobStatus" return="org.apache.hadoop.mapred.JobStatus"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Returns a snapshot of the current status, {@link JobStatus}, of the Job.
+ Call this method again for the latest information.
+ 
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="killJob"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Kill the running job. Blocks until all job tasks have been killed as well.
+ If the job is no longer running, it simply returns.
+ 
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="setJobPriority"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="priority" type="java.lang.String"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Set the priority of a running job.
+ @param priority the new priority for the job.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getTaskCompletionEvents" return="org.apache.hadoop.mapred.TaskCompletionEvent[]"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="startFrom" type="int"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get events indicating completion (success/failure) of component tasks.
+  
+ @param startFrom index to start fetching events from
+ @return an array of {@link TaskCompletionEvent}s
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="killTask"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskId" type="org.apache.hadoop.mapred.TaskAttemptID"/>
+      <param name="shouldFail" type="boolean"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Kill the indicated task attempt.
+ 
+ @param taskId the id of the task to be terminated.
+ @param shouldFail if true the task is failed and added to failed tasks 
+                   list, otherwise it is just killed, w/o affecting 
+                   job failure status.  
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="killTask"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="Applications should rather use {@link #killTask(TaskAttemptID, boolean)}">
+      <param name="taskId" type="java.lang.String"/>
+      <param name="shouldFail" type="boolean"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[@deprecated Applications should use {@link #killTask(TaskAttemptID, boolean)} instead]]>
+      </doc>
+    </method>
+    <method name="getCounters" return="org.apache.hadoop.mapred.Counters"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Gets the counters for this job.
+ 
+ @return the counters for this job or null if the job has been retired.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getTaskDiagnostics" return="java.lang.String[]"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="taskid" type="org.apache.hadoop.mapred.TaskAttemptID"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Gets the diagnostic messages for a given task attempt.
+ @param taskid the id of the task attempt
+ @return the list of diagnostic messages for the task
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getHistoryUrl" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get the URL where the history file is archived. Returns an empty string if 
+ the history file is not available yet. 
+ 
+ @return the URL where the history file is archived
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="isRetired" return="boolean"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Check whether the job has been removed from JobTracker memory and retired.
+ On retirement, the job history file is copied to the location given by 
+ {@link #getHistoryUrl()}.
+ @return <code>true</code> if the job retired, else <code>false</code>.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <method name="getFailureInfo" return="java.lang.String"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Get failure info for the job.
+ @return the failure info for the job.
+ @throws IOException]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[<code>RunningJob</code> is the user-interface to query for details on a 
+ running Map-Reduce job.
+ 
+ <p>Clients can get hold of <code>RunningJob</code> via the {@link JobClient}
+ and then query the running-job for details such as name, configuration, 
+ progress etc.</p> 
+ 
+ @see JobClient]]>
+    </doc>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.RunningJob -->
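+  <!-- Illustrative sketch: submitting a job and querying its RunningJob handle,
+       assuming this method lives in some driver class and job is a configured
+       JobConf.
+
+       public static void runAndCheck(JobConf job) throws IOException {
+         JobClient client = new JobClient(job);
+         RunningJob running = client.submitJob(job);
+         running.waitForCompletion();  // blocks; poll mapProgress()/reduceProgress() for updates
+         if (!running.isSuccessful()) {
+           System.err.println("Job failed: " + running.getFailureInfo());
+         }
+       }
+  -->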
+  <!-- start class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat -->
+  <class name="SequenceFileAsBinaryInputFormat" extends="org.apache.hadoop.mapred.SequenceFileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SequenceFileAsBinaryInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[InputFormat reading keys, values from SequenceFiles in binary (raw)
+ format.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat -->
+  <!-- start class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat -->
+  <class name="SequenceFileAsBinaryOutputFormat" extends="org.apache.hadoop.mapred.SequenceFileOutputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SequenceFileAsBinaryOutputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="setSequenceFileOutputKeyClass"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the key class for the {@link SequenceFile}.
+ <p>This allows the user to specify the key class to be different 
+ from the actual class ({@link BytesWritable}) used for writing.</p>
+ 
+ @param conf the {@link JobConf} to modify
+ @param theClass the SequenceFile output key class.]]>
+      </doc>
+    </method>
+    <method name="setSequenceFileOutputValueClass"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="theClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the value class for the {@link SequenceFile}.
+ <p>This allows the user to specify the value class to be different 
+ from the actual class ({@link BytesWritable}) used for writing.</p>
+ 
+ @param conf the {@link JobConf} to modify
+ @param theClass the SequenceFile output value class.]]>
+      </doc>
+    </method>
+    <method name="getSequenceFileOutputKeyClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the key class for the {@link SequenceFile}
+ 
+ @return the key class of the {@link SequenceFile}]]>
+      </doc>
+    </method>
+    <method name="getSequenceFileOutputValueClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the value class for the {@link SequenceFile}
+ 
+ @return the value class of the {@link SequenceFile}]]>
+      </doc>
+    </method>
+    <method name="getRecordWriter" return="org.apache.hadoop.mapred.RecordWriter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <param name="progress" type="org.apache.hadoop.util.Progressable"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="checkOutputSpecs"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An {@link OutputFormat} that writes keys, values to 
+ {@link SequenceFile}s in binary (raw) format.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat -->
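+  <!-- Illustrative sketch: configuring SequenceFileAsBinaryOutputFormat. The
+       Text/IntWritable choices are assumptions for the example.
+
+       JobConf job = new JobConf();
+       job.setOutputFormat(SequenceFileAsBinaryOutputFormat.class);
+       // declare the logical key/value types recorded in the SequenceFile
+       // header, even though the data itself is written as raw bytes
+       SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(job, Text.class);
+       SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(job, IntWritable.class);
+  -->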
+  <!-- start class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat -->
+  <class name="SequenceFileAsTextInputFormat" extends="org.apache.hadoop.mapred.SequenceFileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SequenceFileAsTextInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[This class is similar to SequenceFileInputFormat, 
+ except it generates SequenceFileAsTextRecordReader 
+ which converts the input keys and values to their 
+ String forms by calling the toString() method.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat -->
+  <!-- start class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader -->
+  <class name="SequenceFileAsTextRecordReader" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.RecordReader"/>
+    <constructor name="SequenceFileAsTextRecordReader" type="org.apache.hadoop.conf.Configuration, org.apache.hadoop.mapred.FileSplit"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </constructor>
+    <method name="createKey" return="org.apache.hadoop.io.Text"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="createValue" return="org.apache.hadoop.io.Text"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="next" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="org.apache.hadoop.io.Text"/>
+      <param name="value" type="org.apache.hadoop.io.Text"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Read the next key/value pair as text.]]>
+      </doc>
+    </method>
+    <method name="getProgress" return="float"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getPos" return="long"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[This class converts the input keys and values to their String forms by calling the
+ toString() method. This class is to SequenceFileAsTextInputFormat as LineRecordReader
+ is to TextInputFormat.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader -->
+  <!-- start class org.apache.hadoop.mapred.SequenceFileInputFilter -->
+  <class name="SequenceFileInputFilter" extends="org.apache.hadoop.mapred.SequenceFileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SequenceFileInputFilter"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Create a record reader for the given split.
+ @param split file split
+ @param job job configuration
+ @param reporter reporter that sends progress reports to the task tracker
+ @return RecordReader]]>
+      </doc>
+    </method>
+    <method name="setFilterClass"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="filterClass" type="java.lang.Class"/>
+      <doc>
+      <![CDATA[Set the filter class.
+ 
+ @param conf application configuration
+ @param filterClass filter class]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[A class that allows a map/red job to work on a sample of sequence files.
+ The sample is decided by the filter class set by the job.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileInputFilter -->
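
A short sketch of the sampling workflow this class enables, assuming the
built-in PercentFilter nested class and its setFrequency helper (the driver
class name is invented): setFilterClass registers the filter, and the
frequency requests roughly a 1-in-10 sample of the records.

  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.SequenceFileInputFilter;

  public class SampledInputSetup {
    public static void configure(JobConf conf) {
      conf.setInputFormat(SequenceFileInputFilter.class);
      SequenceFileInputFilter.setFilterClass(conf,
          SequenceFileInputFilter.PercentFilter.class);
      // Keep about one record in every ten.
      SequenceFileInputFilter.PercentFilter.setFrequency(conf, 10);
    }
  }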
+  <!-- start class org.apache.hadoop.mapred.SequenceFileInputFormat -->
+  <class name="SequenceFileInputFormat" extends="org.apache.hadoop.mapred.FileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SequenceFileInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="listStatus" return="org.apache.hadoop.fs.FileStatus[]"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="split" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An {@link InputFormat} for {@link SequenceFile}s.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileInputFormat -->
+  <!-- start class org.apache.hadoop.mapred.SequenceFileOutputFormat -->
+  <class name="SequenceFileOutputFormat" extends="org.apache.hadoop.mapred.FileOutputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SequenceFileOutputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getRecordWriter" return="org.apache.hadoop.mapred.RecordWriter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <param name="progress" type="org.apache.hadoop.util.Progressable"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getReaders" return="org.apache.hadoop.io.SequenceFile.Reader[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="dir" type="org.apache.hadoop.fs.Path"/>
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Open the output generated by this format.]]>
+      </doc>
+    </method>
+    <method name="getOutputCompressionType" return="org.apache.hadoop.io.SequenceFile.CompressionType"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Get the {@link CompressionType} for the output {@link SequenceFile}.
+ @param conf the {@link JobConf}
+ @return the {@link CompressionType} for the output {@link SequenceFile}, 
+         defaulting to {@link CompressionType#RECORD}]]>
+      </doc>
+    </method>
+    <method name="setOutputCompressionType"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="style" type="org.apache.hadoop.io.SequenceFile.CompressionType"/>
+      <doc>
+      <![CDATA[Set the {@link CompressionType} for the output {@link SequenceFile}.
+ @param conf the {@link JobConf} to modify
+ @param style the {@link CompressionType} for the output
+              {@link SequenceFile}]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[An {@link OutputFormat} that writes {@link SequenceFile}s.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileOutputFormat -->
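
To tie the compression accessors together, a hedged configuration sketch (the
class name is invented): RECORD is the documented default, so a job that wants
block compression must opt in explicitly.

  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.SequenceFileOutputFormat;

  public class CompressedSeqOutputSetup {
    public static void configure(JobConf conf) {
      conf.setOutputFormat(SequenceFileOutputFormat.class);
      FileOutputFormat.setCompressOutput(conf, true);
      // Override the RECORD default; BLOCK usually compresses better.
      SequenceFileOutputFormat.setOutputCompressionType(conf,
          SequenceFile.CompressionType.BLOCK);
    }
  }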
+  <!-- start class org.apache.hadoop.mapred.SequenceFileRecordReader -->
+  <class name="SequenceFileRecordReader" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.RecordReader"/>
+    <constructor name="SequenceFileRecordReader" type="org.apache.hadoop.conf.Configuration, org.apache.hadoop.mapred.FileSplit"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </constructor>
+    <method name="getKeyClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The class of key that must be passed to {@link
+ #next(Object, Object)}.]]>
+      </doc>
+    </method>
+    <method name="getValueClass" return="java.lang.Class"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The class of value that must be passed to {@link
+ #next(Object, Object)}.]]>
+      </doc>
+    </method>
+    <method name="createKey" return="K"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="createValue" return="V"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="next" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="key" type="K"/>
+      <param name="value" type="V"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="next" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="key" type="K"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getCurrentValue"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="value" type="V"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getProgress" return="float"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Return the progress within the input split.
+ @return 0.0 to 1.0 of the input byte range]]>
+      </doc>
+    </method>
+    <method name="getPos" return="long"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="seek"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="pos" type="long"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="close"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <field name="conf" type="org.apache.hadoop.conf.Configuration"
+      transient="false" volatile="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[An {@link RecordReader} for {@link SequenceFile}s.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SequenceFileRecordReader -->
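
The methods above combine into the canonical old-API read loop. A minimal
sketch (the utility class is invented; it works for any RecordReader,
including SequenceFileRecordReader): create the reusable key/value holders
once, then call next() until it returns false.

  import java.io.IOException;
  import org.apache.hadoop.mapred.RecordReader;

  public class ReaderLoop {
    public static <K, V> long drain(RecordReader<K, V> reader) throws IOException {
      long records = 0;
      K key = reader.createKey();        // holders are reused across calls
      V value = reader.createValue();
      try {
        while (reader.next(key, value)) {
          records++;                     // real code would use key/value here
        }
      } finally {
        reader.close();
      }
      return records;
    }
  }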
+  <!-- start class org.apache.hadoop.mapred.SkipBadRecords -->
+  <class name="SkipBadRecords" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SkipBadRecords"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getAttemptsToStartSkipping" return="int"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get the number of Task attempts AFTER which skip mode
+ will be kicked off. When skip mode is kicked off, the
+ task reports the range of records which it will process
+ next to the TaskTracker, so that on failures the TaskTracker knows
+ which records are possibly bad. On further executions,
+ those records are skipped.
+ Default value is 2.
+ 
+ @param conf the configuration
+ @return attemptsToStartSkipping number of task attempts]]>
+      </doc>
+    </method>
+    <method name="setAttemptsToStartSkipping"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="attemptsToStartSkipping" type="int"/>
+      <doc>
+      <![CDATA[Set the number of Task attempts AFTER which skip mode
+ will be kicked off. When skip mode is kicked off, the
+ task reports the range of records which it will process
+ next to the TaskTracker, so that on failures the TaskTracker knows
+ which records are possibly bad. On further executions,
+ those records are skipped.
+ Default value is 2.
+ 
+ @param conf the configuration
+ @param attemptsToStartSkipping number of task attempts]]>
+      </doc>
+    </method>
+    <method name="getAutoIncrMapperProcCount" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get the flag which, if set to true, causes
+ {@link SkipBadRecords#COUNTER_MAP_PROCESSED_RECORDS} to be incremented
+ by MapRunner after invoking the map function. This value must be set to
+ false for applications which process the records asynchronously
+ or buffer the input records (for example, streaming);
+ in such cases applications should increment this counter on their own.
+ Default value is true.
+ 
+ @param conf the configuration
+ @return <code>true</code> if auto increment
+                       {@link SkipBadRecords#COUNTER_MAP_PROCESSED_RECORDS},
+         <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="setAutoIncrMapperProcCount"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="autoIncr" type="boolean"/>
+      <doc>
+      <![CDATA[Set the flag which, if set to true, causes
+ {@link SkipBadRecords#COUNTER_MAP_PROCESSED_RECORDS} to be incremented
+ by MapRunner after invoking the map function. This value must be set to
+ false for applications which process the records asynchronously
+ or buffer the input records (for example, streaming);
+ in such cases applications should increment this counter on their own.
+ Default value is true.
+ 
+ @param conf the configuration
+ @param autoIncr whether to auto increment
+        {@link SkipBadRecords#COUNTER_MAP_PROCESSED_RECORDS}.]]>
+      </doc>
+    </method>
+    <method name="getAutoIncrReducerProcCount" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get the flag which, if set to true, causes
+ {@link SkipBadRecords#COUNTER_REDUCE_PROCESSED_GROUPS} to be incremented
+ by the framework after invoking the reduce function. This value must be set
+ to false for applications which process the records asynchronously
+ or buffer the input records (for example, streaming);
+ in such cases applications should increment this counter on their own.
+ Default value is true.
+ 
+ @param conf the configuration
+ @return <code>true</code> if auto increment
+                    {@link SkipBadRecords#COUNTER_REDUCE_PROCESSED_GROUPS},
+         <code>false</code> otherwise.]]>
+      </doc>
+    </method>
+    <method name="setAutoIncrReducerProcCount"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="autoIncr" type="boolean"/>
+      <doc>
+      <![CDATA[Set the flag which, if set to true, causes
+ {@link SkipBadRecords#COUNTER_REDUCE_PROCESSED_GROUPS} to be incremented
+ by the framework after invoking the reduce function. This value must be set
+ to false for applications which process the records asynchronously
+ or buffer the input records (for example, streaming);
+ in such cases applications should increment this counter on their own.
+ Default value is true.
+ 
+ @param conf the configuration
+ @param autoIncr whether to auto increment
+        {@link SkipBadRecords#COUNTER_REDUCE_PROCESSED_GROUPS}.]]>
+      </doc>
+    </method>
+    <method name="getSkipOutputPath" return="org.apache.hadoop.fs.Path"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get the directory to which skipped records are written. By default it is
+ a subdirectory of the output _logs directory.
+ Users can stop the writing of skipped records by setting the value to null.
+ 
+ @param conf the configuration.
+ @return path skip output directory. Null is returned if this is not set
+ and the output directory is also not set.]]>
+      </doc>
+    </method>
+    <method name="setSkipOutputPath"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="path" type="org.apache.hadoop.fs.Path"/>
+      <doc>
+      <![CDATA[Set the directory to which skipped records are written. By default it is
+ a subdirectory of the output _logs directory.
+ Users can stop the writing of skipped records by setting the value to null.
+ 
+ @param conf the configuration.
+ @param path skip output directory path]]>
+      </doc>
+    </method>
+    <method name="getMapperMaxSkipRecords" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get the number of acceptable skip records surrounding the bad record, PER
+ bad record, in the mapper. The number includes the bad record as well.
+ To turn off the detection/skipping of bad records, set the
+ value to 0.
+ The framework tries to narrow down the skipped range by retrying
+ until this threshold is met OR all attempts get exhausted for this task.
+ Set the value to Long.MAX_VALUE to indicate that the framework need not try
+ to narrow down; whatever records (application-dependent) get skipped are
+ acceptable.
+ Default value is 0.
+ 
+ @param conf the configuration
+ @return maxSkipRecs acceptable skip records.]]>
+      </doc>
+    </method>
+    <method name="setMapperMaxSkipRecords"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="maxSkipRecs" type="long"/>
+      <doc>
+      <![CDATA[Set the number of acceptable skip records surrounding the bad record, PER
+ bad record, in the mapper. The number includes the bad record as well.
+ To turn off the detection/skipping of bad records, set the
+ value to 0.
+ The framework tries to narrow down the skipped range by retrying
+ until this threshold is met OR all attempts get exhausted for this task.
+ Set the value to Long.MAX_VALUE to indicate that the framework need not try
+ to narrow down; whatever records (application-dependent) get skipped are
+ acceptable.
+ Default value is 0.
+ 
+ @param conf the configuration
+ @param maxSkipRecs acceptable skip records.]]>
+      </doc>
+    </method>
+    <method name="getReducerMaxSkipGroups" return="long"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <doc>
+      <![CDATA[Get the number of acceptable skip groups surrounding the bad group, PER
+ bad group, in the reducer. The number includes the bad group as well.
+ To turn off the detection/skipping of bad groups, set the
+ value to 0.
+ The framework tries to narrow down the skipped range by retrying
+ until this threshold is met OR all attempts get exhausted for this task.
+ Set the value to Long.MAX_VALUE to indicate that the framework need not try
+ to narrow down; whatever groups (application-dependent) get skipped are
+ acceptable.
+ Default value is 0.
+ 
+ @param conf the configuration
+ @return maxSkipGrps acceptable skip groups.]]>
+      </doc>
+    </method>
+    <method name="setReducerMaxSkipGroups"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.conf.Configuration"/>
+      <param name="maxSkipGrps" type="long"/>
+      <doc>
+      <![CDATA[Set the number of acceptable skip groups surrounding the bad group, PER
+ bad group, in the reducer. The number includes the bad group as well.
+ To turn off the detection/skipping of bad groups, set the
+ value to 0.
+ The framework tries to narrow down the skipped range by retrying
+ until this threshold is met OR all attempts get exhausted for this task.
+ Set the value to Long.MAX_VALUE to indicate that the framework need not try
+ to narrow down; whatever groups (application-dependent) get skipped are
+ acceptable.
+ Default value is 0.
+ 
+ @param conf the configuration
+ @param maxSkipGrps acceptable skip groups.]]>
+      </doc>
+    </method>
+    <field name="COUNTER_GROUP" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Special counters which are written by the application and are 
+ used by the framework for detecting bad records. For detecting bad records 
+ these counters must be incremented by the application.]]>
+      </doc>
+    </field>
+    <field name="COUNTER_MAP_PROCESSED_RECORDS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Number of processed map records.
+ @see SkipBadRecords#getAutoIncrMapperProcCount(Configuration)]]>
+      </doc>
+    </field>
+    <field name="COUNTER_REDUCE_PROCESSED_GROUPS" type="java.lang.String"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Number of processed reduce groups.
+ @see SkipBadRecords#getAutoIncrReducerProcCount(Configuration)]]>
+      </doc>
+    </field>
+    <doc>
+    <![CDATA[Utility class for the skip-bad-records functionality. It contains various
+ settings related to the skipping of bad records.
+ 
+ <p>Hadoop provides an optional mode of execution in which bad records
+ are detected and skipped in further attempts.
+ 
+ <p>This feature can be used when map/reduce tasks crash deterministically on
+ certain input. This happens due to bugs in the map/reduce function. The usual
+ course would be to fix these bugs. But sometimes this is not possible;
+ perhaps the bug is in third-party libraries for which the source code is
+ not available. Due to this, the task never reaches completion even with
+ multiple attempts, and complete data for that task is lost.</p>
+  
+ <p>With this feature, only a small portion of data surrounding
+ the bad record is lost, which may be acceptable for some applications;
+ see {@link SkipBadRecords#setMapperMaxSkipRecords(Configuration, long)}.</p>
+ 
+ <p>The skipping mode gets kicked off after a certain number of failures;
+ see {@link SkipBadRecords#setAttemptsToStartSkipping(Configuration, int)}.</p>
+  
+ <p>In the skipping mode, the map/reduce task maintains the record range which
+ is being processed at all times. Before giving the input to the
+ map/reduce function, it sends this record range to the TaskTracker.
+ If the task crashes, the TaskTracker knows which range was last reported.
+ On further attempts that range gets skipped.</p>]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SkipBadRecords -->
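
A hedged end-to-end sketch of the settings documented above (the driver class
name is invented): kick off skip mode after two failed attempts, allow up to
100 records around each bad map record to be skipped, and leave detection
turned off on the reduce side.

  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.SkipBadRecords;

  public class SkippingSetup {
    public static void configure(JobConf conf) {
      SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
      // Narrow each bad map record down to a window of at most 100 records.
      SkipBadRecords.setMapperMaxSkipRecords(conf, 100L);
      // 0 disables detection/skipping of bad groups on the reduce side.
      SkipBadRecords.setReducerMaxSkipGroups(conf, 0L);
    }
  }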
+  <!-- start class org.apache.hadoop.mapred.SplitLocationInfo -->
+  <class name="SplitLocationInfo" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="SplitLocationInfo" type="java.lang.String, boolean"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="isOnDisk" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="isInMemory" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getLocation" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.SplitLocationInfo -->
+  <!-- start interface org.apache.hadoop.mapred.TaskAttemptContext -->
+  <interface name="TaskAttemptContext"    abstract="true"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapreduce.TaskAttemptContext"/>
+    <method name="getTaskAttemptID" return="org.apache.hadoop.mapred.TaskAttemptID"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getProgressible" return="org.apache.hadoop.util.Progressable"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getJobConf" return="org.apache.hadoop.mapred.JobConf"
+      abstract="true" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+  </interface>
+  <!-- end interface org.apache.hadoop.mapred.TaskAttemptContext -->
+  <!-- start class org.apache.hadoop.mapred.TaskAttemptID -->
+  <class name="TaskAttemptID" extends="org.apache.hadoop.mapreduce.TaskAttemptID"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="TaskAttemptID" type="org.apache.hadoop.mapred.TaskID, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a TaskAttemptID object from given {@link TaskID}.  
+ @param taskId TaskID that this task belongs to  
+ @param id the task attempt number]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskAttemptID" type="java.lang.String, int, boolean, int, int"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #TaskAttemptID(String, int, TaskType, int, int)}.">
+      <doc>
+      <![CDATA[Constructs a TaskId object from given parts.
+ @param jtIdentifier jobTracker identifier
+ @param jobId job number 
+ @param isMap whether the tip is a map 
+ @param taskId taskId number
+ @param id the task attempt number
+ @deprecated Use {@link #TaskAttemptID(String, int, TaskType, int, int)}.]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskAttemptID" type="java.lang.String, int, org.apache.hadoop.mapreduce.TaskType, int, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a TaskId object from given parts.
+ @param jtIdentifier jobTracker identifier
+ @param jobId job number 
+ @param type the TaskType 
+ @param taskId taskId number
+ @param id the task attempt number]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskAttemptID"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="downgrade" return="org.apache.hadoop.mapred.TaskAttemptID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="old" type="org.apache.hadoop.mapreduce.TaskAttemptID"/>
+      <doc>
+      <![CDATA[Downgrade a new TaskAttemptID to an old one.
+ @param old the new id
+ @return either old or a new TaskAttemptID constructed to match old]]>
+      </doc>
+    </method>
+    <method name="getTaskID" return="org.apache.hadoop.mapred.TaskID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getJobID" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="read" return="org.apache.hadoop.mapred.TaskAttemptID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="forName" return="org.apache.hadoop.mapred.TaskAttemptID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="str" type="java.lang.String"/>
+      <exception name="IllegalArgumentException" type="java.lang.IllegalArgumentException"/>
+      <doc>
+      <![CDATA[Construct a TaskAttemptID object from a given string.
+ @return constructed TaskAttemptID object, or null if the given String is null
+ @throws IllegalArgumentException if the given string is malformed]]>
+      </doc>
+    </method>
+    <method name="getTaskAttemptIDsPattern" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jtIdentifier" type="java.lang.String"/>
+      <param name="jobId" type="java.lang.Integer"/>
+      <param name="isMap" type="java.lang.Boolean"/>
+      <param name="taskId" type="java.lang.Integer"/>
+      <param name="attemptId" type="java.lang.Integer"/>
+      <doc>
+      <![CDATA[Returns a regex pattern which matches task attempt IDs. Arguments can
+ be given null, in which case that part of the regex will be generic.
+ For example, to obtain a regex matching <i>all task attempt IDs</i>
+ of <i>any jobtracker</i>, in <i>any job</i>, of the <i>first
+ map task</i>, we would use:
+ <pre> 
+ TaskAttemptID.getTaskAttemptIDsPattern(null, null, true, 1, null);
+ </pre>
+ which will return:
+ <pre> "attempt_[^_]*_[0-9]*_m_000001_[0-9]*" </pre> 
+ @param jtIdentifier jobTracker identifier, or null
+ @param jobId job number, or null
+ @param isMap whether the tip is a map, or null 
+ @param taskId taskId number, or null
+ @param attemptId the task attempt number, or null
+ @return a regex pattern matching TaskAttemptIDs]]>
+      </doc>
+    </method>
+    <method name="getTaskAttemptIDsPattern" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jtIdentifier" type="java.lang.String"/>
+      <param name="jobId" type="java.lang.Integer"/>
+      <param name="type" type="org.apache.hadoop.mapreduce.TaskType"/>
+      <param name="taskId" type="java.lang.Integer"/>
+      <param name="attemptId" type="java.lang.Integer"/>
+      <doc>
+      <![CDATA[Returns a regex pattern which matches task attempt IDs. Arguments can
+ be given null, in which case that part of the regex will be generic.
+ For example, to obtain a regex matching <i>all task attempt IDs</i>
+ of <i>any jobtracker</i>, in <i>any job</i>, of the <i>first
+ map task</i>, we would use:
+ <pre> 
+ TaskAttemptID.getTaskAttemptIDsPattern(null, null, TaskType.MAP, 1, null);
+ </pre>
+ which will return:
+ <pre> "attempt_[^_]*_[0-9]*_m_000001_[0-9]*" </pre> 
+ @param jtIdentifier jobTracker identifier, or null
+ @param jobId job number, or null
+ @param type the {@link TaskType} 
+ @param taskId taskId number, or null
+ @param attemptId the task attempt number, or null
+ @return a regex pattern matching TaskAttemptIDs]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[TaskAttemptID represents the immutable and unique identifier for
+ a task attempt. Each task attempt is one particular instance of a Map or
+ Reduce Task identified by its TaskID.
+ 
+ TaskAttemptID consists of 2 parts. The first part is the
+ {@link TaskID} that this TaskAttemptID belongs to.
+ The second part is the task attempt number. <br>
+ An example TaskAttemptID is:
+ <code>attempt_200707121733_0003_m_000005_0</code>, which represents the
+ zeroth task attempt of the fifth map task in the third job
+ running at the jobtracker started at <code>200707121733</code>.
+ <p>
+ Applications should never construct or parse TaskAttemptID strings,
+ but rather use appropriate constructors or the {@link #forName(String)}
+ method.
+ 
+ @see JobID
+ @see TaskID]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TaskAttemptID -->
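
Parsing the Javadoc's example ID with forName, as a small sketch (the class
name is invented); the printed forms follow from the ID structure described
above.

  import org.apache.hadoop.mapred.TaskAttemptID;

  public class AttemptIdDemo {
    public static void main(String[] args) {
      TaskAttemptID id =
          TaskAttemptID.forName("attempt_200707121733_0003_m_000005_0");
      System.out.println(id.getJobID());   // job_200707121733_0003
      System.out.println(id.getTaskID());  // task_200707121733_0003_m_000005
      System.out.println(id.getId());      // 0, the attempt number
    }
  }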
+  <!-- start class org.apache.hadoop.mapred.TaskCompletionEvent -->
+  <class name="TaskCompletionEvent" extends="org.apache.hadoop.mapreduce.TaskCompletionEvent"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="TaskCompletionEvent"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Default constructor for Writable.]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskCompletionEvent" type="int, org.apache.hadoop.mapred.TaskAttemptID, int, boolean, org.apache.hadoop.mapred.TaskCompletionEvent.Status, java.lang.String"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructor. eventId should be created externally and incremented
+ per event for each job.
+ @param eventId event id; event ids should be unique and assigned
+  incrementally, starting from 0.
+ @param taskId task id
+ @param status task's status
+ @param taskTrackerHttp task tracker's host:port for http.]]>
+      </doc>
+    </constructor>
+    <method name="getTaskId" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use {@link #getTaskAttemptId()} instead.">
+      <doc>
+      <![CDATA[Returns task id. 
+ @return task id
+ @deprecated use {@link #getTaskAttemptId()} instead.]]>
+      </doc>
+    </method>
+    <method name="getTaskAttemptId" return="org.apache.hadoop.mapred.TaskAttemptID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns task id. 
+ @return task id]]>
+      </doc>
+    </method>
+    <method name="getTaskStatus" return="org.apache.hadoop.mapred.TaskCompletionEvent.Status"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Returns the {@link Status}.
+ @return task completion status]]>
+      </doc>
+    </method>
+    <method name="setTaskId"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use {@link #setTaskAttemptId(TaskAttemptID)} instead.">
+      <param name="taskId" type="java.lang.String"/>
+      <doc>
+      <![CDATA[Sets the task id.
+ @param taskId the task id string
+ @deprecated use {@link #setTaskAttemptId(TaskAttemptID)} instead.]]>
+      </doc>
+    </method>
+    <method name="setTaskID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="use {@link #setTaskAttemptId(TaskAttemptID)} instead.">
+      <param name="taskId" type="org.apache.hadoop.mapred.TaskAttemptID"/>
+      <doc>
+      <![CDATA[Sets the task id.
+ @param taskId the task attempt id
+ @deprecated use {@link #setTaskAttemptId(TaskAttemptID)} instead.]]>
+      </doc>
+    </method>
+    <method name="setTaskAttemptId"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="taskId" type="org.apache.hadoop.mapred.TaskAttemptID"/>
+      <doc>
+      <![CDATA[Sets the task attempt id.
+ @param taskId the task attempt id]]>
+      </doc>
+    </method>
+    <field name="EMPTY_ARRAY" type="org.apache.hadoop.mapred.TaskCompletionEvent[]"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <doc>
+    <![CDATA[This is used to track task completion events on
+ the job tracker.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TaskCompletionEvent -->
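
On the client side these events are typically consumed by polling a running
job. A hedged sketch (the class name is invented) using RunningJob's
getTaskCompletionEvents, which pages through events starting from a given
index; paging works because event ids are assigned incrementally from 0.

  import java.io.IOException;
  import org.apache.hadoop.mapred.RunningJob;
  import org.apache.hadoop.mapred.TaskCompletionEvent;

  public class EventPoller {
    public static void dump(RunningJob job) throws IOException {
      int from = 0;
      TaskCompletionEvent[] events;
      while ((events = job.getTaskCompletionEvents(from)).length > 0) {
        for (TaskCompletionEvent e : events) {
          System.out.println(e.getTaskAttemptId() + " -> " + e.getTaskStatus());
        }
        from += events.length;  // advance past the events already seen
      }
    }
  }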
+  <!-- start class org.apache.hadoop.mapred.TaskCompletionEvent.Status -->
+  <class name="TaskCompletionEvent.Status" extends="java.lang.Enum"
+    abstract="false"
+    static="true" final="true" visibility="public"
+    deprecated="not deprecated">
+    <method name="values" return="org.apache.hadoop.mapred.TaskCompletionEvent.Status[]"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="valueOf" return="org.apache.hadoop.mapred.TaskCompletionEvent.Status"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="name" type="java.lang.String"/>
+    </method>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TaskCompletionEvent.Status -->
+  <!-- start class org.apache.hadoop.mapred.TaskID -->
+  <class name="TaskID" extends="org.apache.hadoop.mapreduce.TaskID"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="TaskID" type="org.apache.hadoop.mapreduce.JobID, boolean, int"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #TaskID(String, int, TaskType, int)}">
+      <doc>
+      <![CDATA[Constructs a TaskID object from given {@link JobID}.  
+ @param jobId JobID that this tip belongs to 
+ @param isMap whether the tip is a map 
+ @param id the tip number
+ @deprecated Use {@link #TaskID(String, int, TaskType, int)}]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskID" type="java.lang.String, int, boolean, int"
+      static="false" final="false" visibility="public"
+      deprecated="Use {@link #TaskID(org.apache.hadoop.mapreduce.JobID, TaskType,
+ int)}">
+      <doc>
+      <![CDATA[Constructs a TaskInProgressId object from given parts.
+ @param jtIdentifier jobTracker identifier
+ @param jobId job number 
+ @param isMap whether the tip is a map 
+ @param id the tip number
+ @deprecated Use {@link #TaskID(org.apache.hadoop.mapreduce.JobID, TaskType,
+ int)}]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskID" type="org.apache.hadoop.mapreduce.JobID, org.apache.hadoop.mapreduce.TaskType, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a TaskID object from given {@link JobID}.  
+ @param jobId JobID that this tip belongs to 
+ @param type the {@link TaskType} 
+ @param id the tip number]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskID" type="java.lang.String, int, org.apache.hadoop.mapreduce.TaskType, int"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Constructs a TaskInProgressId object from given parts.
+ @param jtIdentifier jobTracker identifier
+ @param jobId job number 
+ @param type the {@link TaskType} 
+ @param id the tip number]]>
+      </doc>
+    </constructor>
+    <constructor name="TaskID"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="downgrade" return="org.apache.hadoop.mapred.TaskID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="old" type="org.apache.hadoop.mapreduce.TaskID"/>
+      <doc>
+      <![CDATA[Downgrade a new TaskID to an old one.
+ @param old a new or old TaskID
+ @return either old or a new TaskID built to match old]]>
+      </doc>
+    </method>
+    <method name="read" return="org.apache.hadoop.mapred.TaskID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="in" type="java.io.DataInput"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <method name="getJobID" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="getTaskIDsPattern" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="Use {@link TaskID#getTaskIDsPattern(String, Integer, TaskType,
+ Integer)}">
+      <param name="jtIdentifier" type="java.lang.String"/>
+      <param name="jobId" type="java.lang.Integer"/>
+      <param name="isMap" type="java.lang.Boolean"/>
+      <param name="taskId" type="java.lang.Integer"/>
+      <doc>
+      <![CDATA[Returns a regex pattern which matches task IDs. Arguments can
+ be given null, in which case that part of the regex will be generic.
+ For example, to obtain a regex matching <i>the first map task</i>
+ of <i>any jobtracker</i>, of <i>any job</i>, we would use:
+ <pre> 
+ TaskID.getTaskIDsPattern(null, null, true, 1);
+ </pre>
+ which will return:
+ <pre> "task_[^_]*_[0-9]*_m_000001*" </pre> 
+ @param jtIdentifier jobTracker identifier, or null
+ @param jobId job number, or null
+ @param isMap whether the tip is a map, or null 
+ @param taskId taskId number, or null
+ @return a regex pattern matching TaskIDs
+ @deprecated Use {@link TaskID#getTaskIDsPattern(String, Integer, TaskType,
+ Integer)}]]>
+      </doc>
+    </method>
+    <method name="getTaskIDsPattern" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jtIdentifier" type="java.lang.String"/>
+      <param name="jobId" type="java.lang.Integer"/>
+      <param name="type" type="org.apache.hadoop.mapreduce.TaskType"/>
+      <param name="taskId" type="java.lang.Integer"/>
+      <doc>
+      <![CDATA[Returns a regex pattern which matches task IDs. Arguments can
+ be given null, in which case that part of the regex will be generic.
+ For example, to obtain a regex matching <i>the first map task</i>
+ of <i>any jobtracker</i>, of <i>any job</i>, we would use:
+ <pre> 
+ TaskID.getTaskIDsPattern(null, null, TaskType.MAP, 1);
+ </pre>
+ which will return:
+ <pre> "task_[^_]*_[0-9]*_m_000001*" </pre> 
+ @param jtIdentifier jobTracker identifier, or null
+ @param jobId job number, or null
+ @param type the {@link TaskType}, or null 
+ @param taskId taskId number, or null
+ @return a regex pattern matching TaskIDs]]>
+      </doc>
+    </method>
+    <method name="forName" return="org.apache.hadoop.mapred.TaskID"
+      abstract="false" native="false" synchronized="false"
+      static="true" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="str" type="java.lang.String"/>
+      <exception name="IllegalArgumentException" type="java.lang.IllegalArgumentException"/>
+    </method>
+    <doc>
+    <![CDATA[TaskID represents the immutable and unique identifier for
+ a Map or Reduce Task. Each TaskID encompasses multiple attempts made to
+ execute the Map or Reduce Task, each of which is uniquely identified by
+ its TaskAttemptID.
+ 
+ TaskID consists of 3 parts. The first part is the {@link JobID} that this
+ TaskInProgress belongs to. The second part of the TaskID is either 'm' or 'r',
+ representing whether the task is a map task or a reduce task.
+ The third part is the task number. <br>
+ An example TaskID is:
+ <code>task_200707121733_0003_m_000005</code>, which represents the
+ fifth map task in the third job running at the jobtracker
+ started at <code>200707121733</code>.
+ <p>
+ Applications should never construct or parse TaskID strings,
+ but rather use appropriate constructors or the {@link #forName(String)}
+ method.
+ 
+ @see JobID
+ @see TaskAttemptID]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TaskID -->
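
A brief sketch of the non-deprecated pattern helper (the class name is
invented): null arguments widen the corresponding part of the regex, as the
Javadoc above describes.

  import org.apache.hadoop.mapred.TaskID;
  import org.apache.hadoop.mapreduce.TaskType;

  public class TaskIdPatternDemo {
    public static void main(String[] args) {
      // First map task of any job on any jobtracker.
      String pattern = TaskID.getTaskIDsPattern(null, null, TaskType.MAP, 1);
      System.out.println(pattern);  // task_[^_]*_[0-9]*_m_000001*
    }
  }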
+  <!-- start class org.apache.hadoop.mapred.TaskReport -->
+  <class name="TaskReport" extends="org.apache.hadoop.mapreduce.TaskReport"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="TaskReport"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getTaskId" return="java.lang.String"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The string of the task id.]]>
+      </doc>
+    </method>
+    <method name="getTaskID" return="org.apache.hadoop.mapred.TaskID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[The id of the task.]]>
+      </doc>
+    </method>
+    <method name="getCounters" return="org.apache.hadoop.mapred.Counters"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </method>
+    <method name="setSuccessfulAttempt"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="t" type="org.apache.hadoop.mapred.TaskAttemptID"/>
+      <doc>
+      <![CDATA[Set the successful attempt ID of the task.]]>
+      </doc>
+    </method>
+    <method name="getSuccessfulTaskAttempt" return="org.apache.hadoop.mapred.TaskAttemptID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the attempt ID that took this task to completion.]]>
+      </doc>
+    </method>
+    <method name="setRunningTaskAttempts"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="runningAttempts" type="java.util.Collection"/>
+      <doc>
+      <![CDATA[Set the running attempt(s) of the task.]]>
+      </doc>
+    </method>
+    <method name="getRunningTaskAttempts" return="java.util.Collection"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[Get the running task attempt IDs for this task.]]>
+      </doc>
+    </method>
+    <method name="setFinishTime"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="finishTime" type="long"/>
+      <doc>
+      <![CDATA[Set the finish time of the task.
+ @param finishTime finish time of task.]]>
+      </doc>
+    </method>
+    <method name="setStartTime"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="startTime" type="long"/>
+      <doc>
+      <![CDATA[Set the start time of the task.]]>
+      </doc>
+    </method>
+    <doc>
+    <![CDATA[A report on the state of a task.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TaskReport -->
+  <!-- start class org.apache.hadoop.mapred.TextInputFormat -->
+  <class name="TextInputFormat" extends="org.apache.hadoop.mapred.FileInputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <implements name="org.apache.hadoop.mapred.JobConfigurable"/>
+    <constructor name="TextInputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="configure"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="conf" type="org.apache.hadoop.mapred.JobConf"/>
+    </method>
+    <method name="isSplitable" return="boolean"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="fs" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="file" type="org.apache.hadoop.fs.Path"/>
+    </method>
+    <method name="getRecordReader" return="org.apache.hadoop.mapred.RecordReader"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="genericSplit" type="org.apache.hadoop.mapred.InputSplit"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="reporter" type="org.apache.hadoop.mapred.Reporter"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An {@link InputFormat} for plain text files.  Files are broken into lines.
+ Either linefeed or carriage-return is used to signal end of line.  Keys are
+ the position in the file, and values are the line of text.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TextInputFormat -->
+  <!-- start class org.apache.hadoop.mapred.TextOutputFormat -->
+  <class name="TextOutputFormat" extends="org.apache.hadoop.mapred.FileOutputFormat"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="TextOutputFormat"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <method name="getRecordWriter" return="org.apache.hadoop.mapred.RecordWriter"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="ignored" type="org.apache.hadoop.fs.FileSystem"/>
+      <param name="job" type="org.apache.hadoop.mapred.JobConf"/>
+      <param name="name" type="java.lang.String"/>
+      <param name="progress" type="org.apache.hadoop.util.Progressable"/>
+      <exception name="IOException" type="java.io.IOException"/>
+    </method>
+    <doc>
+    <![CDATA[An {@link OutputFormat} that writes plain text files.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.TextOutputFormat -->
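
Wiring both text formats into an old-API job, as a hedged sketch (the driver
class and paths are invented): TextInputFormat yields (position, line) pairs
and TextOutputFormat writes key<TAB>value lines.

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.FileOutputFormat;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.TextInputFormat;
  import org.apache.hadoop.mapred.TextOutputFormat;

  public class TextJobSetup {
    public static JobConf configure(JobConf conf) {
      conf.setInputFormat(TextInputFormat.class);
      conf.setOutputFormat(TextOutputFormat.class);
      FileInputFormat.addInputPath(conf, new Path("/in"));
      FileOutputFormat.setOutputPath(conf, new Path("/out"));
      return conf;
    }
  }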
+  <!-- start class org.apache.hadoop.mapred.Utils -->
+  <class name="Utils" extends="java.lang.Object"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="Utils"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+    </constructor>
+    <doc>
+    <![CDATA[A utility class. It provides
+ a path filter utility to filter out output/part files in the output directory.]]>
+    </doc>
+  </class>
+  <!-- end class org.apache.hadoop.mapred.Utils -->
+</package>
+<package name="org.apache.hadoop.mapred.jobcontrol">
+  <!-- start class org.apache.hadoop.mapred.jobcontrol.Job -->
+  <class name="Job" extends="org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob"
+    abstract="false"
+    static="false" final="false" visibility="public"
+    deprecated="not deprecated">
+    <constructor name="Job" type="org.apache.hadoop.mapred.JobConf, java.util.ArrayList"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+      <doc>
+      <![CDATA[Construct a job.
+ @param jobConf a mapred job configuration representing a job to be executed.
+ @param dependingJobs a list of jobs the current job depends on]]>
+      </doc>
+    </constructor>
+    <constructor name="Job" type="org.apache.hadoop.mapred.JobConf"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <exception name="IOException" type="java.io.IOException"/>
+    </constructor>
+    <method name="getAssignedJobID" return="org.apache.hadoop.mapred.JobID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the mapred ID of this job as assigned by the mapred framework.]]>
+      </doc>
+    </method>
+    <method name="setAssignedJobID"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="setAssignedJobID should not be called.
+ JOBID is set by the framework.">
+      <param name="mapredJobID" type="org.apache.hadoop.mapred.JobID"/>
+      <doc>
+      <![CDATA[@deprecated setAssignedJobID should not be called.
+ JOBID is set by the framework.]]>
+      </doc>
+    </method>
+    <method name="getJobConf" return="org.apache.hadoop.mapred.JobConf"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the mapred job conf of this job]]>
+      </doc>
+    </method>
+    <method name="setJobConf"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="jobConf" type="org.apache.hadoop.mapred.JobConf"/>
+      <doc>
+      <![CDATA[Set the mapred job conf for this job.
+ @param jobConf the mapred job conf for this job.]]>
+      </doc>
+    </method>
+    <method name="getState" return="int"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the state of this job]]>
+      </doc>
+    </method>
+    <method name="setState"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="protected"
+      deprecated="not deprecated">
+      <param name="state" type="int"/>
+      <doc>
+      <![CDATA[This is a no-op method; it is a behavior change from 1.x. The state
+ can no longer be changed from the job.
+ 
+ @param state
+          the new state for this job.]]>
+      </doc>
+    </method>
+    <method name="addDependingJob" return="boolean"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="dependingJob" type="org.apache.hadoop.mapred.jobcontrol.Job"/>
+      <doc>
+      <![CDATA[Add a job to this job's dependency list.
+ Dependent jobs can only be added while a Job
+ is waiting to run, not during or afterwards.
+ 
+ @param dependingJob Job that this Job depends on.
+ @return <tt>true</tt> if the Job was added.]]>
+      </doc>
+    </method>
+    <method name="getJobClient" return="org.apache.hadoop.mapred.JobClient"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the job client of this job]]>
+      </doc>
+    </method>
+    <method name="getDependingJobs" return="java.util.ArrayList"
+      abstract="false" native="false" synchronized="false"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the depending jobs of this job]]>
+      </doc>
+    </method>
+    <method name="getMapredJobID" return="java.lang.String"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <doc>
+      <![CDATA[@return the mapred ID of this job as assigned by the mapred framework.]]>
+      </doc>
+    </method>
+    <method name="setMapredJobID"
+      abstract="false" native="false" synchronized="true"
+      static="false" final="false" visibility="public"
+      deprecated="not deprecated">
+      <param name="mapredJobID" type="java.lang.String"/>
+      <doc>
+      <![CDATA[This is a no-op method for backward compatibility. It's a behavior change
+ from 1.x; job ids can no longer be changed from the job.
+ 
+ @param mapredJobID
+          the mapred job ID for this job.]]>
+      </doc>
+    </method>
+    <field name="SUCCESS" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="WAITING" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="RUNNING" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="READY" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
+    <field name="FAILED" type="int"
+      transient="false" volatile="false"
+      static="true" final="true" visibility="public"
+      deprecated="not deprecated">
+    </field>
... 52942 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org