Posted to common-commits@hadoop.apache.org by na...@apache.org on 2017/09/03 23:55:17 UTC

[37/77] [abbrv] [partial] hadoop git commit: HADOOP-14364. refresh changelog/release notes with newer Apache Yetus build

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
index 4e13959..55f65c0 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.0/RELEASENOTES.0.20.0.md
@@ -23,325 +23,325 @@ These release notes cover new developer and user-facing incompatibilities, impor
 
 ---
 
-* [HADOOP-5565](https://issues.apache.org/jira/browse/HADOOP-5565) | *Major* | **The job instrumentation API needs to have a method for finalizeJob,**
+* [HADOOP-4234](https://issues.apache.org/jira/browse/HADOOP-4234) | *Minor* | **KFS: Allow KFS layer to interface with multiple KFS namenodes**
 
-Add finalizeJob & terminateJob methods to JobTrackerInstrumentation class
+Changed KFS glue layer to allow applications to interface with multiple KFS metaservers.
 
 
 ---
 
-* [HADOOP-5548](https://issues.apache.org/jira/browse/HADOOP-5548) | *Blocker* | **Observed negative running maps on the job tracker**
+* [HADOOP-4210](https://issues.apache.org/jira/browse/HADOOP-4210) | *Major* | **Findbugs warnings are printed related to equals implementation of several classes**
 
-Adds synchronization for JobTracker methods in RecoveryManager.
+Changed public class org.apache.hadoop.mapreduce.ID to be an abstract class. Removed from class org.apache.hadoop.mapreduce.ID the methods  public static ID read(DataInput in) and public static ID forName(String str).
 
 
 ---
 
-* [HADOOP-5531](https://issues.apache.org/jira/browse/HADOOP-5531) | *Blocker* | **Remove Chukwa on branch-0.20**
+* [HADOOP-4253](https://issues.apache.org/jira/browse/HADOOP-4253) | *Major* | **Fix warnings generated by FindBugs**
 
-Disabled Chukwa unit tests for 0.20 branch only.
+Removed  from class org.apache.hadoop.fs.RawLocalFileSystem deprecated methods public String getName(), public void lock(Path p, boolean shared) and public void release(Path p).
 
 
 ---
 
-* [HADOOP-5521](https://issues.apache.org/jira/browse/HADOOP-5521) | *Major* | **Remove dependency of testcases on RESTART\_COUNT**
+* [HADOOP-4284](https://issues.apache.org/jira/browse/HADOOP-4284) | *Major* | **Support for user configurable global filters on HttpServer**
 
-This patch makes TestJobHistory and its dependent testcases independent of RESTART\_COUNT.
+Introduced HttpServer method to support global filters.
 
 
 ---
 
-* [HADOOP-5468](https://issues.apache.org/jira/browse/HADOOP-5468) | *Major* | **Change Hadoop doc menu to sub-menus**
+* [HADOOP-4454](https://issues.apache.org/jira/browse/HADOOP-4454) | *Minor* | **Support comments in 'slaves'  file**
 
-Reformatted HTML documentation for Hadoop to use submenus at the left column.
+Changed processing of conf/slaves file to allow # to begin a comment.
 
 
 ---
 
-* [HADOOP-5030](https://issues.apache.org/jira/browse/HADOOP-5030) | *Major* | **Chukwa RPM build improvements**
+* [HADOOP-4572](https://issues.apache.org/jira/browse/HADOOP-4572) | *Major* | **INode and its sub-classes should be package private**
 
-Changed RPM install location to the value specified by build.properties file.
+Moved org.apache.hadoop.hdfs.{CreateEditsLog, NNThroughputBenchmark} to org.apache.hadoop.hdfs.server.namenode.
 
 
 ---
 
-* [HADOOP-4970](https://issues.apache.org/jira/browse/HADOOP-4970) | *Major* | **Use the full path when move files to .Trash/Current**
+* [HADOOP-4575](https://issues.apache.org/jira/browse/HADOOP-4575) | *Major* | **An independent HTTPS proxy for HDFS**
 
-Changed trash facility to use absolute path of the deleted file.
+Introduced independent HSFTP proxy server for authenticated access to clusters.
 
 
 ---
 
-* [HADOOP-4873](https://issues.apache.org/jira/browse/HADOOP-4873) | *Major* | **display minMaps/Reduces on advanced scheduler page**
+* [HADOOP-4618](https://issues.apache.org/jira/browse/HADOOP-4618) | *Major* | **Move http server from FSNamesystem into NameNode.**
 
-Changed fair scheduler UI to display minMaps and minReduces variables.
+Moved HTTP server from FSNameSystem to NameNode. Removed FSNamesystem.getNameNodeInfoPort(). Replaced FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort() with new method  FSNamesystem.getDFSNameNodeAddress(). Removed constructor NameNode(bindAddress, conf).
 
 
 ---
 
-* [HADOOP-4843](https://issues.apache.org/jira/browse/HADOOP-4843) | *Major* | **Collect Job History log file and Job Conf file into Chukwa**
+* [HADOOP-4567](https://issues.apache.org/jira/browse/HADOOP-4567) | *Major* | **GetFileBlockLocations should return the NetworkTopology information of the machines that hosts those blocks**
 
-Introduced Chukwa collection of job history.
+Changed GetFileBlockLocations to return topology information for nodes that host the block replicas.
 
 
 ---
 
-* [HADOOP-4827](https://issues.apache.org/jira/browse/HADOOP-4827) | *Major* | **Improve data aggregation in database**
+* [HADOOP-4435](https://issues.apache.org/jira/browse/HADOOP-4435) | *Minor* | **The JobTracker should display the amount of heap memory used**
 
-Improved framework for data aggregation in Chukwa.
+Changed JobTracker web status page to display the amount of heap memory in use. This changes the JobSubmissionProtocol.
 
 
 ---
 
-* [HADOOP-4826](https://issues.apache.org/jira/browse/HADOOP-4826) | *Major* | **Admin command saveNamespace.**
+* [HADOOP-3923](https://issues.apache.org/jira/browse/HADOOP-3923) | *Minor* | **Deprecate org.apache.hadoop.mapred.StatusHttpServer**
 
-Introduced new dfsadmin command saveNamespace to command the name service to do an immediate save of the file system image.
+Moved class org.apache.hadoop.mapred.StatusHttpServer to org.apache.hadoop.http.HttpServer.
 
 
 ---
 
-* [HADOOP-4789](https://issues.apache.org/jira/browse/HADOOP-4789) | *Minor* | **Change fair scheduler to share between pools by default, not between invidual jobs**
+* [HADOOP-4188](https://issues.apache.org/jira/browse/HADOOP-4188) | *Major* | **Remove Task's dependency on concrete file systems**
 
-Changed fair scheduler to divide resources equally between pools, not jobs.
+Removed Task's dependency on concrete file systems by taking list from FileSystem class. Added statistics table to FileSystem class. Deprecated FileSystem method getStatistics(Class\<? extends FileSystem\> cls).
 
 
 ---
 
-* [HADOOP-4783](https://issues.apache.org/jira/browse/HADOOP-4783) | *Blocker* | **History files are given world readable permissions.**
+* [HADOOP-4661](https://issues.apache.org/jira/browse/HADOOP-4661) | *Major* | **distch: a tool for distributed ch{mod,own}**
 
-Changed history directory permissions to 750 and history file permissions to 740.
+Introduced distch tool for parallel ch{mod, own, grp}.
 
 
 ---
 
-* [HADOOP-4749](https://issues.apache.org/jira/browse/HADOOP-4749) | *Major* | **reducer should output input data size when shuffling is done**
+* [HADOOP-1650](https://issues.apache.org/jira/browse/HADOOP-1650) | *Major* | **Upgrade Jetty to 6.x**
 
-Added a new counter REDUCE\_INPUT\_BYTES.
+Upgraded all core servers to use Jetty 6
 
 
 ---
 
-* [HADOOP-4661](https://issues.apache.org/jira/browse/HADOOP-4661) | *Major* | **distch: a tool for distributed ch{mod,own}**
+* [HADOOP-3986](https://issues.apache.org/jira/browse/HADOOP-3986) | *Major* | **JobClient should not have a static configuration**
 
-Introduced distch tool for parallel ch{mod, own, grp}.
+Removed classes org.apache.hadoop.mapred.JobShell and org.apache.hadoop.mapred.TestJobShell. Removed from JobClient methods static void  setCommandLineConfig(Configuration conf) and public static Configuration getCommandLineConfig().
 
 
 ---
 
-* [HADOOP-4631](https://issues.apache.org/jira/browse/HADOOP-4631) | *Major* | **Split the default configurations into 3 parts**
+* [HADOOP-4422](https://issues.apache.org/jira/browse/HADOOP-4422) | *Major* | **S3 file systems should not create bucket**
 
-Split hadoop-default.xml into core-default.xml, hdfs-default.xml and mapreduce-default.xml.
+Modified Hadoop file system to no longer create S3 buckets. Applications can create buckets for their S3 file systems by other means, for example, using the JetS3t API.
 
 
 ---
 
-* [HADOOP-4618](https://issues.apache.org/jira/browse/HADOOP-4618) | *Major* | **Move http server from FSNamesystem into NameNode.**
+* [HADOOP-3422](https://issues.apache.org/jira/browse/HADOOP-3422) | *Major* | **Ganglia counter metrics are all reported with the metric name "value", so the counter values can not be seen**
 
-Moved HTTP server from FSNameSystem to NameNode. Removed FSNamesystem.getNameNodeInfoPort(). Replaced FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort() with new method  FSNamesystem.getDFSNameNodeAddress(). Removed constructor NameNode(bindAddress, conf).
+Changed names of ganglia metrics to avoid conflicts and to better identify source function.
 
 
 ---
 
-* [HADOOP-4576](https://issues.apache.org/jira/browse/HADOOP-4576) | *Major* | **Modify pending tasks count in the UI to pending jobs count in the UI**
+* [HADOOP-4035](https://issues.apache.org/jira/browse/HADOOP-4035) | *Blocker* | **Modify the capacity scheduler (HADOOP-3445) to schedule tasks based on memory requirements and task trackers free memory**
 
-Changed capacity scheduler UI to better present number of running and pending tasks.
+Changed capacity scheduler policy to take note of task memory requirements and task tracker memory availability.
 
 
 ---
 
-* [HADOOP-4575](https://issues.apache.org/jira/browse/HADOOP-4575) | *Major* | **An independent HTTPS proxy for HDFS**
+* [HADOOP-3750](https://issues.apache.org/jira/browse/HADOOP-3750) | *Major* | **Fix and enforce module dependencies**
 
-Introduced independent HSFTP proxy server for authenticated access to clusters.
+Removed deprecated method parseArgs from org.apache.hadoop.fs.FileSystem.
 
 
 ---
 
-* [HADOOP-4572](https://issues.apache.org/jira/browse/HADOOP-4572) | *Major* | **INode and its sub-classes should be package private**
+* [HADOOP-3497](https://issues.apache.org/jira/browse/HADOOP-3497) | *Major* | **File globbing with a PathFilter is too restrictive**
 
-Moved org.apache.hadoop.hdfs.{CreateEditsLog, NNThroughputBenchmark} to org.apache.hadoop.hdfs.server.namenode.
+Changed the semantics of file globbing with a PathFilter (using the globStatus method of FileSystem). Previously, the filtering was too restrictive, so that a glob of /\*/\* and a filter that only accepts /a/b would not have matched /a/b. With this change /a/b does match.
 
 
 ---
 
-* [HADOOP-4567](https://issues.apache.org/jira/browse/HADOOP-4567) | *Major* | **GetFileBlockLocations should return the NetworkTopology information of the machines that hosts those blocks**
+* [HADOOP-4576](https://issues.apache.org/jira/browse/HADOOP-4576) | *Major* | **Modify pending tasks count in the UI to pending jobs count in the UI**
 
-Changed GetFileBlockLocations to return topology information for nodes that host the block replicas.
+Changed capacity scheduler UI to better present number of running and pending tasks.
 
 
 ---
 
-* [HADOOP-4565](https://issues.apache.org/jira/browse/HADOOP-4565) | *Major* | **MultiFileInputSplit can use data locality information to create splits**
+* [HADOOP-4305](https://issues.apache.org/jira/browse/HADOOP-4305) | *Major* | **repeatedly blacklisted tasktrackers should get declared dead**
 
-Improved MultiFileInputFormat so that multiple blocks from the same node or same rack can be combined into a single split.
+Improved TaskTracker blacklisting strategy to better exclude faulty tracker from executing tasks.
 
 
 ---
 
-* [HADOOP-4454](https://issues.apache.org/jira/browse/HADOOP-4454) | *Minor* | **Support comments in 'slaves'  file**
+* [HADOOP-4445](https://issues.apache.org/jira/browse/HADOOP-4445) | *Major* | **Wrong number of running map/reduce tasks are displayed in queue information.**
 
-Changed processing of conf/slaves file to allow # to begin a comment.
+Changed JobTracker UI to better present the number of active tasks.
 
 
 ---
 
-* [HADOOP-4445](https://issues.apache.org/jira/browse/HADOOP-4445) | *Major* | **Wrong number of running map/reduce tasks are displayed in queue information.**
+* [HADOOP-4179](https://issues.apache.org/jira/browse/HADOOP-4179) | *Major* | **Hadoop-Vaidya : Rule based performance diagnostic tool for Map/Reduce jobs**
 
-Changed JobTracker UI to better present the number of active tasks.
+Introduced Vaidya rule based performance diagnostic tool for Map/Reduce jobs.
 
 
 ---
 
-* [HADOOP-4435](https://issues.apache.org/jira/browse/HADOOP-4435) | *Minor* | **The JobTracker should display the amount of heap memory used**
+* [HADOOP-4029](https://issues.apache.org/jira/browse/HADOOP-4029) | *Major* | **NameNode should report status and performance for each replica of image and log**
 
-Changed JobTracker web status page to display the amount of heap memory in use. This changes the JobSubmissionProtocol.
+Added name node storage information to the dfshealth page, and moved data node information to a separate page.
 
 
 ---
 
-* [HADOOP-4422](https://issues.apache.org/jira/browse/HADOOP-4422) | *Major* | **S3 file systems should not create bucket**
+* [HADOOP-4749](https://issues.apache.org/jira/browse/HADOOP-4749) | *Major* | **reducer should output input data size when shuffling is done**
 
-Modified Hadoop file system to no longer create S3 buckets. Applications can create buckets for their S3 file systems by other means, for example, using the JetS3t API.
+Added a new counter REDUCE\_INPUT\_BYTES.
 
 
 ---
 
-* [HADOOP-4374](https://issues.apache.org/jira/browse/HADOOP-4374) | *Major* | **JVM should not be killed but given an opportunity to exit gracefully**
+* [HADOOP-4826](https://issues.apache.org/jira/browse/HADOOP-4826) | *Major* | **Admin command saveNamespace.**
 
-This patch (1) adds a shutdownHook that does syncLogs so that logs of the current task are flushed and log.index is up to date in cases like System.exit() or being killed using signals (other than SIGKILL).
-(2) changes writeToIndexFile() to write to a temporary index file first and then rename it to log.index so that updates to the log.index file are atomic.
+Introduced new dfsadmin command saveNamespace to command the name service to do an immediate save of the file system image.
 
 
 ---
 
-* [HADOOP-4305](https://issues.apache.org/jira/browse/HADOOP-4305) | *Major* | **repeatedly blacklisted tasktrackers should get declared dead**
+* [HADOOP-3063](https://issues.apache.org/jira/browse/HADOOP-3063) | *Major* | **BloomMapFile - fail-fast version of MapFile for sparsely populated key space**
 
-Improved TaskTracker blacklisting strategy to better exclude faulty tracker from executing tasks.
+Introduced BloomMapFile subclass of MapFile that creates a Bloom filter from all keys.
 
 
 ---
 
-* [HADOOP-4284](https://issues.apache.org/jira/browse/HADOOP-4284) | *Major* | **Support for user configurable global filters on HttpServer**
+* [HADOOP-1230](https://issues.apache.org/jira/browse/HADOOP-1230) | *Major* | **Replace parameters with context objects in Mapper, Reducer, Partitioner, InputFormat, and OutputFormat classes**
 
-Introduced HttpServer method to support global filters.
+Replaced parameters with context objects in Mapper, Reducer, Partitioner, InputFormat, and OutputFormat classes.
 
 
 ---
 
-* [HADOOP-4253](https://issues.apache.org/jira/browse/HADOOP-4253) | *Major* | **Fix warnings generated by FindBugs**
+* [HADOOP-4631](https://issues.apache.org/jira/browse/HADOOP-4631) | *Major* | **Split the default configurations into 3 parts**
 
-Removed  from class org.apache.hadoop.fs.RawLocalFileSystem deprecated methods public String getName(), public void lock(Path p, boolean shared) and public void release(Path p).
+Split hadoop-default.xml into core-default.xml, hdfs-default.xml and mapreduce-default.xml.
 
 
 ---
 
-* [HADOOP-4234](https://issues.apache.org/jira/browse/HADOOP-4234) | *Minor* | **KFS: Allow KFS layer to interface with multiple KFS namenodes**
+* [HADOOP-3344](https://issues.apache.org/jira/browse/HADOOP-3344) | *Major* | **libhdfs: always builds 32bit, even when x86\_64 Java used**
 
-Changed KFS glue layer to allow applications to interface with multiple KFS metaservers.
+Changed build procedure for libhdfs to build correctly for different platforms. Build instructions are in the Jira item.
 
 
 ---
 
-* [HADOOP-4210](https://issues.apache.org/jira/browse/HADOOP-4210) | *Major* | **Findbugs warnings are printed related to equals implementation of several classes**
+* [HADOOP-4827](https://issues.apache.org/jira/browse/HADOOP-4827) | *Major* | **Improve data aggregation in database**
 
-Changed public class org.apache.hadoop.mapreduce.ID to be an abstract class. Removed from class org.apache.hadoop.mapreduce.ID the methods  public static ID read(DataInput in) and public static ID forName(String str).
+Improved framework for data aggregation in Chukwa.
 
 
 ---
 
-* [HADOOP-4188](https://issues.apache.org/jira/browse/HADOOP-4188) | *Major* | **Remove Task's dependency on concrete file systems**
+* [HADOOP-4789](https://issues.apache.org/jira/browse/HADOOP-4789) | *Minor* | **Change fair scheduler to share between pools by default, not between invidual jobs**
 
-Removed Task's dependency on concrete file systems by taking list from FileSystem class. Added statistics table to FileSystem class. Deprecated FileSystem method getStatistics(Class\<? extends FileSystem\> cls).
+Changed fair scheduler to divide resources equally between pools, not jobs.
 
 
 ---
 
-* [HADOOP-4179](https://issues.apache.org/jira/browse/HADOOP-4179) | *Major* | **Hadoop-Vaidya : Rule based performance diagnostic tool for Map/Reduce jobs**
+* [HADOOP-4843](https://issues.apache.org/jira/browse/HADOOP-4843) | *Major* | **Collect Job History log file and Job Conf file into Chukwa**
 
-Introduced Vaidya rule based performance diagnostic tool for Map/Reduce jobs.
+Introduced Chukwa collection of job history.
 
 
 ---
 
-* [HADOOP-4103](https://issues.apache.org/jira/browse/HADOOP-4103) | *Major* | **Alert for missing blocks**
+* [HADOOP-5030](https://issues.apache.org/jira/browse/HADOOP-5030) | *Major* | **Chukwa RPM build improvements**
 
-Modified dfsadmin -report to report under-replicated blocks, blocks with corrupt replicas, and missing blocks.
+Changed RPM install location to the value specified by build.properties file.
 
 
 ---
 
-* [HADOOP-4035](https://issues.apache.org/jira/browse/HADOOP-4035) | *Blocker* | **Modify the capacity scheduler (HADOOP-3445) to schedule tasks based on memory requirements and task trackers free memory**
+* [HADOOP-4970](https://issues.apache.org/jira/browse/HADOOP-4970) | *Major* | **Use the full path when move files to .Trash/Current**
 
-Changed capacity scheduler policy to take note of task memory requirements and task tracker memory availability.
+Changed trash facility to use absolute path of the deleted file.
 
 
 ---
 
-* [HADOOP-4029](https://issues.apache.org/jira/browse/HADOOP-4029) | *Major* | **NameNode should report status and performance for each replica of image and log**
+* [HADOOP-4565](https://issues.apache.org/jira/browse/HADOOP-4565) | *Major* | **MultiFileInputSplit can use data locality information to create splits**
 
-Added name node storage information to the dfshealth page, and moved data node information to a separate page.
+Improved MultiFileInputFormat so that multiple blocks from the same node or same rack can be combined into a single split.
 
 
 ---
 
-* [HADOOP-3986](https://issues.apache.org/jira/browse/HADOOP-3986) | *Major* | **JobClient should not have a static configuration**
+* [HADOOP-4873](https://issues.apache.org/jira/browse/HADOOP-4873) | *Major* | **display minMaps/Reduces on advanced scheduler page**
 
-Removed classes org.apache.hadoop.mapred.JobShell and org.apache.hadoop.mapred.TestJobShell. Removed from JobClient methods static void  setCommandLineConfig(Configuration conf) and public static Configuration getCommandLineConfig().
+Changed fair scheduler UI to display minMaps and minReduces variables.
 
 
 ---
 
-* [HADOOP-3923](https://issues.apache.org/jira/browse/HADOOP-3923) | *Minor* | **Deprecate org.apache.hadoop.mapred.StatusHttpServer**
+* [HADOOP-4103](https://issues.apache.org/jira/browse/HADOOP-4103) | *Major* | **Alert for missing blocks**
 
-Moved class org.apache.hadoop.mapred.StatusHttpServer to org.apache.hadoop.http.HttpServer.
+Modified dfsadmin -report to report under-replicated blocks, blocks with corrupt replicas, and missing blocks.
 
 
 ---
 
-* [HADOOP-3750](https://issues.apache.org/jira/browse/HADOOP-3750) | *Major* | **Fix and enforce module dependencies**
+* [HADOOP-4783](https://issues.apache.org/jira/browse/HADOOP-4783) | *Blocker* | **History files are given world readable permissions.**
 
-Removed deprecated method parseArgs from org.apache.hadoop.fs.FileSystem.
+Changed history directory permissions to 750 and history file permissions to 740.
 
 
 ---
 
-* [HADOOP-3497](https://issues.apache.org/jira/browse/HADOOP-3497) | *Major* | **File globbing with a PathFilter is too restrictive**
+* [HADOOP-5521](https://issues.apache.org/jira/browse/HADOOP-5521) | *Major* | **Remove dependency of testcases on RESTART\_COUNT**
 
-Changed the semantics of file globbing with a PathFilter (using the globStatus method of FileSystem). Previously, the filtering was too restrictive, so that a glob of /\*/\* and a filter that only accepts /a/b would not have matched /a/b. With this change /a/b does match.
+This patch makes TestJobHistory and its dependent testcases independent of RESTART\_COUNT.
 
 
 ---
 
-* [HADOOP-3422](https://issues.apache.org/jira/browse/HADOOP-3422) | *Major* | **Ganglia counter metrics are all reported with the metric name "value", so the counter values can not be seen**
+* [HADOOP-5468](https://issues.apache.org/jira/browse/HADOOP-5468) | *Major* | **Change Hadoop doc menu to sub-menus**
 
-Changed names of ganglia metrics to avoid conflicts and to better identify source function.
+Reformatted HTML documentation for Hadoop to use submenus at the left column.
 
 
 ---
 
-* [HADOOP-3344](https://issues.apache.org/jira/browse/HADOOP-3344) | *Major* | **libhdfs: always builds 32bit, even when x86\_64 Java used**
+* [HADOOP-5531](https://issues.apache.org/jira/browse/HADOOP-5531) | *Blocker* | **Remove Chukwa on branch-0.20**
 
-Changed build procedure for libhdfs to build correctly for different platforms. Build instructions are in the Jira item.
+Disabled Chukwa unit tests for 0.20 branch only.
 
 
 ---
 
-* [HADOOP-3063](https://issues.apache.org/jira/browse/HADOOP-3063) | *Major* | **BloomMapFile - fail-fast version of MapFile for sparsely populated key space**
+* [HADOOP-5565](https://issues.apache.org/jira/browse/HADOOP-5565) | *Major* | **The job instrumentation API needs to have a method for finalizeJob,**
 
-Introduced BloomMapFile subclass of MapFile that creates a Bloom filter from all keys.
+Add finalizeJob & terminateJob methods to JobTrackerInstrumentation class
 
 
 ---
 
-* [HADOOP-1650](https://issues.apache.org/jira/browse/HADOOP-1650) | *Major* | **Upgrade Jetty to 6.x**
+* [HADOOP-4374](https://issues.apache.org/jira/browse/HADOOP-4374) | *Major* | **JVM should not be killed but given an opportunity to exit gracefully**
 
-Upgraded all core servers to use Jetty 6
+This patch (1) adds a shutdownHook that does syncLogs so that logs of the current task are flushed and log.index is up to date in cases like System.exit() or being killed using signals (other than SIGKILL).
+(2) changes writeToIndexFile() to write to a temporary index file first and then rename it to log.index so that updates to the log.index file are atomic.
 
 
 ---
 
-* [HADOOP-1230](https://issues.apache.org/jira/browse/HADOOP-1230) | *Major* | **Replace parameters with context objects in Mapper, Reducer, Partitioner, InputFormat, and OutputFormat classes**
+* [HADOOP-5548](https://issues.apache.org/jira/browse/HADOOP-5548) | *Blocker* | **Observed negative running maps on the job tracker**
 
-Replaced parameters with context objects in Mapper, Reducer, Partitioner, InputFormat, and OutputFormat classes.
+Adds synchronization for JobTracker methods in RecoveryManager.
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/CHANGES.0.20.1.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/CHANGES.0.20.1.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/CHANGES.0.20.1.md
index 45ca0d7..ceccdf5 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/CHANGES.0.20.1.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/CHANGES.0.20.1.md
@@ -24,110 +24,98 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-5881](https://issues.apache.org/jira/browse/HADOOP-5881) | Simplify configuration related to task-memory-monitoring and memory-based scheduling |  Major | . | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
 | [HADOOP-5726](https://issues.apache.org/jira/browse/HADOOP-5726) | Remove pre-emption from the capacity scheduler code base |  Major | . | Hemanth Yamijala | rahul k singh |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-5881](https://issues.apache.org/jira/browse/HADOOP-5881) | Simplify configuration related to task-memory-monitoring and memory-based scheduling |  Major | . | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
 
 
 ### NEW FEATURES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-6080](https://issues.apache.org/jira/browse/HADOOP-6080) | Handling of  Trash with quota |  Major | fs | Koji Noguchi | Jakob Homan |
 | [HADOOP-5714](https://issues.apache.org/jira/browse/HADOOP-5714) | Metric to show number of fs.exists (or number of getFileInfo) calls |  Minor | metrics | Koji Noguchi | Jakob Homan |
 | [HADOOP-3315](https://issues.apache.org/jira/browse/HADOOP-3315) | New binary file format |  Major | io | Owen O'Malley | Hong Tang |
+| [HADOOP-6080](https://issues.apache.org/jira/browse/HADOOP-6080) | Handling of  Trash with quota |  Major | fs | Koji Noguchi | Jakob Homan |
 
 
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HDFS-635](https://issues.apache.org/jira/browse/HDFS-635) | HDFS Project page does not show 0.20.1 documentation/release information. |  Major | documentation | Andy Sautins |  |
+| [MAPREDUCE-465](https://issues.apache.org/jira/browse/MAPREDUCE-465) | Deprecate org.apache.hadoop.mapred.lib.MultithreadedMapRunner |  Minor | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
 | [HDFS-527](https://issues.apache.org/jira/browse/HDFS-527) | Refactor DFSClient constructors |  Major | hdfs-client | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
 | [MAPREDUCE-767](https://issues.apache.org/jira/browse/MAPREDUCE-767) | to remove mapreduce dependency on commons-cli2 |  Major | contrib/streaming | Giridharan Kesavan | Amar Kamat |
-| [MAPREDUCE-465](https://issues.apache.org/jira/browse/MAPREDUCE-465) | Deprecate org.apache.hadoop.mapred.lib.MultithreadedMapRunner |  Minor | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HDFS-635](https://issues.apache.org/jira/browse/HDFS-635) | HDFS Project page does not show 0.20.1 documentation/release information. |  Major | documentation | Andy Sautins |  |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-6215](https://issues.apache.org/jira/browse/HADOOP-6215) | fix GenericOptionParser to deal with -D with '=' in the value |  Major | . | Owen O'Malley | Amar Kamat |
-| [HADOOP-6145](https://issues.apache.org/jira/browse/HADOOP-6145) | No error message for deleting non-existant file or directory. |  Major | fs | Suman Sehgal | Jakob Homan |
-| [HADOOP-6141](https://issues.apache.org/jira/browse/HADOOP-6141) | hadoop 0.20 branch "test-patch" is broken |  Major | build | Hong Tang | Hong Tang |
-| [HADOOP-6139](https://issues.apache.org/jira/browse/HADOOP-6139) | Incomplete help message is displayed for rm and rmr options. |  Minor | . | Suman Sehgal | Jakob Homan |
-| [HADOOP-6017](https://issues.apache.org/jira/browse/HADOOP-6017) | NameNode and SecondaryNameNode fail to restart because of abnormal filenames. |  Blocker | . | Raghu Angadi | Tsz Wo Nicholas Sze |
-| [HADOOP-5951](https://issues.apache.org/jira/browse/HADOOP-5951) | StorageInfo needs Apache license header. |  Major | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-5937](https://issues.apache.org/jira/browse/HADOOP-5937) | Correct  info message  "Use hadoop dfs -safemode option"  to  " Use hdfs dfsadmin -safemode option"  . |  Minor | . | Ravi Phulari | Ravi Phulari |
-| [HADOOP-5932](https://issues.apache.org/jira/browse/HADOOP-5932) | MemoryMatcher logs 0 as freeMemOnTT even though there are free slots available on TaskTraker |  Major | . | Karam Singh | Vinod Kumar Vavilapalli |
-| [HADOOP-5924](https://issues.apache.org/jira/browse/HADOOP-5924) | JT fails to recover the jobs after restart after HADOOP:4372 |  Major | . | Ramya Sunil | Amar Kamat |
-| [HADOOP-5921](https://issues.apache.org/jira/browse/HADOOP-5921) | JobTracker does not come up because of NotReplicatedYetException |  Major | . | Amareshwari Sriramadasu | Amar Kamat |
-| [HADOOP-5920](https://issues.apache.org/jira/browse/HADOOP-5920) | TestJobHistory fails some times. |  Major | . | Amareshwari Sriramadasu | Amar Kamat |
-| [HADOOP-5908](https://issues.apache.org/jira/browse/HADOOP-5908) | ArithmeticException in heartbeats with zero map jobs |  Major | . | Vinod Kumar Vavilapalli | Amar Kamat |
-| [HADOOP-5884](https://issues.apache.org/jira/browse/HADOOP-5884) | Capacity scheduler should account high memory jobs as using more capacity of the queue |  Major | . | Hemanth Yamijala | Vinod Kumar Vavilapalli |
-| [HADOOP-5883](https://issues.apache.org/jira/browse/HADOOP-5883) | TaskMemoryMonitorThread might shoot down tasks even if their processes momentarily exceed the requested memory |  Major | . | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-5882](https://issues.apache.org/jira/browse/HADOOP-5882) | Progress is not updated when the New Reducer is running reduce phase |  Blocker | . | Jothi Padmanabhan | Amareshwari Sriramadasu |
-| [HADOOP-5850](https://issues.apache.org/jira/browse/HADOOP-5850) | map/reduce doesn't run jobs with 0 maps |  Critical | . | Owen O'Malley | Vinod Kumar Vavilapalli |
-| [HADOOP-5828](https://issues.apache.org/jira/browse/HADOOP-5828) | Use absolute path for JobTracker's mapred.local.dir in MiniMRCluster |  Major | test | Hemanth Yamijala | Hemanth Yamijala |
-| [HADOOP-5746](https://issues.apache.org/jira/browse/HADOOP-5746) | Errors encountered in MROutputThread after the last map/reduce call can go undetected |  Major | . | Devaraj Das | Amar Kamat |
-| [HADOOP-5736](https://issues.apache.org/jira/browse/HADOOP-5736) | Update CapacityScheduler documentation to reflect latest changes |  Major | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
-| [HADOOP-5719](https://issues.apache.org/jira/browse/HADOOP-5719) | Jobs failed during job initalization are never removed from Capacity Schedulers waiting list |  Major | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
-| [HADOOP-5718](https://issues.apache.org/jira/browse/HADOOP-5718) | Capacity Scheduler should not check for presence of default queue while starting up. |  Major | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
-| [HADOOP-5711](https://issues.apache.org/jira/browse/HADOOP-5711) | Change Namenode file close log to info |  Minor | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-5691](https://issues.apache.org/jira/browse/HADOOP-5691) | org.apache.hadoop.mapreduce.Reducer should not be abstract. |  Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
-| [HADOOP-5688](https://issues.apache.org/jira/browse/HADOOP-5688) | HftpFileSystem.getChecksum(..) does not work for the paths with scheme and authority |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-5655](https://issues.apache.org/jira/browse/HADOOP-5655) | TestMRServerPorts fails on java.net.BindException |  Major | . | Hairong Kuang | Devaraj Das |
+| [HADOOP-5210](https://issues.apache.org/jira/browse/HADOOP-5210) | Reduce Task Progress shows \> 100% when the total size of map outputs (for a single reducer) is high |  Minor | . | Jothi Padmanabhan | Ravi Gummadi |
 | [HADOOP-5654](https://issues.apache.org/jira/browse/HADOOP-5654) | TestReplicationPolicy.\<init\> fails on java.net.BindException |  Major | test | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5648](https://issues.apache.org/jira/browse/HADOOP-5648) | Not able to generate gridmix.jar on already compiled version of hadoop |  Major | benchmarks | Suman Sehgal | Giridharan Kesavan |
-| [HADOOP-5646](https://issues.apache.org/jira/browse/HADOOP-5646) | TestQueueCapacities is failing Hudson tests for the last few builds |  Major | . | Jothi Padmanabhan | Vinod Kumar Vavilapalli |
-| [HADOOP-5641](https://issues.apache.org/jira/browse/HADOOP-5641) | Possible NPE in CapacityScheduler's MemoryMatcher |  Major | . | Vinod Kumar Vavilapalli | Hemanth Yamijala |
-| [HADOOP-5636](https://issues.apache.org/jira/browse/HADOOP-5636) | Job is left in Running state after a killJob |  Critical | . | Amareshwari Sriramadasu | Amar Kamat |
-| [HADOOP-5539](https://issues.apache.org/jira/browse/HADOOP-5539) | o.a.h.mapred.Merger not maintaining map out compression on intermediate files |  Blocker | . | Billy Pearson | Jothi Padmanabhan |
+| [HADOOP-5655](https://issues.apache.org/jira/browse/HADOOP-5655) | TestMRServerPorts fails on java.net.BindException |  Major | . | Hairong Kuang | Devaraj Das |
 | [HADOOP-5533](https://issues.apache.org/jira/browse/HADOOP-5533) | Recovery duration shown on the jobtracker webpage is inaccurate |  Major | . | Amar Kamat | Amar Kamat |
-| [HADOOP-5349](https://issues.apache.org/jira/browse/HADOOP-5349) | When the size required for a path is -1, LocalDirAllocator.getLocalPathForWrite fails with a DiskCheckerException when the disk it selects is bad. |  Major | . | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
-| [HADOOP-5213](https://issues.apache.org/jira/browse/HADOOP-5213) | BZip2CompressionOutputStream NullPointerException |  Blocker | io | Zheng Shao | Zheng Shao |
-| [HADOOP-5210](https://issues.apache.org/jira/browse/HADOOP-5210) | Reduce Task Progress shows \> 100% when the total size of map outputs (for a single reducer) is high |  Minor | . | Jothi Padmanabhan | Ravi Gummadi |
+| [HADOOP-5646](https://issues.apache.org/jira/browse/HADOOP-5646) | TestQueueCapacities is failing Hudson tests for the last few builds |  Major | . | Jothi Padmanabhan | Vinod Kumar Vavilapalli |
+| [HADOOP-5691](https://issues.apache.org/jira/browse/HADOOP-5691) | org.apache.hadoop.mapreduce.Reducer should not be abstract. |  Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-5688](https://issues.apache.org/jira/browse/HADOOP-5688) | HftpFileSystem.getChecksum(..) does not work for the paths with scheme and authority |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
 | [HADOOP-4674](https://issues.apache.org/jira/browse/HADOOP-4674) | hadoop fs -help should list detailed help info for the following commands: test, text, tail, stat & touchz |  Trivial | fs | David NeSmith | Ravi Phulari |
+| [HADOOP-5711](https://issues.apache.org/jira/browse/HADOOP-5711) | Change Namenode file close log to info |  Minor | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HADOOP-5213](https://issues.apache.org/jira/browse/HADOOP-5213) | BZip2CompressionOutputStream NullPointerException |  Blocker | io | Zheng Shao | Zheng Shao |
+| [HADOOP-5736](https://issues.apache.org/jira/browse/HADOOP-5736) | Update CapacityScheduler documentation to reflect latest changes |  Major | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
+| [HADOOP-5718](https://issues.apache.org/jira/browse/HADOOP-5718) | Capacity Scheduler should not check for presence of default queue while starting up. |  Major | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
+| [HADOOP-5719](https://issues.apache.org/jira/browse/HADOOP-5719) | Jobs failed during job initalization are never removed from Capacity Schedulers waiting list |  Major | . | Sreekanth Ramakrishnan | Sreekanth Ramakrishnan |
+| [HADOOP-5349](https://issues.apache.org/jira/browse/HADOOP-5349) | When the size required for a path is -1, LocalDirAllocator.getLocalPathForWrite fails with a DiskCheckerException when the disk it selects is bad. |  Major | . | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
+| [HADOOP-5636](https://issues.apache.org/jira/browse/HADOOP-5636) | Job is left in Running state after a killJob |  Critical | . | Amareshwari Sriramadasu | Amar Kamat |
+| [HADOOP-5641](https://issues.apache.org/jira/browse/HADOOP-5641) | Possible NPE in CapacityScheduler's MemoryMatcher |  Major | . | Vinod Kumar Vavilapalli | Hemanth Yamijala |
+| [HADOOP-5828](https://issues.apache.org/jira/browse/HADOOP-5828) | Use absolute path for JobTracker's mapred.local.dir in MiniMRCluster |  Major | test | Hemanth Yamijala | Hemanth Yamijala |
+| [HADOOP-5850](https://issues.apache.org/jira/browse/HADOOP-5850) | map/reduce doesn't run jobs with 0 maps |  Critical | . | Owen O'Malley | Vinod Kumar Vavilapalli |
 | [HADOOP-4626](https://issues.apache.org/jira/browse/HADOOP-4626) | API link in forrest doc should point to the same version of hadoop. |  Minor | documentation | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HDFS-1022](https://issues.apache.org/jira/browse/HDFS-1022) | Merge under-10-min tests specs into one file |  Major | test | Erik Steffl | Erik Steffl |
-| [HDFS-525](https://issues.apache.org/jira/browse/HDFS-525) | ListPathsServlet.java uses static SimpleDateFormat that has threading issues |  Major | namenode | Suresh Srinivas | Suresh Srinivas |
-| [HDFS-438](https://issues.apache.org/jira/browse/HDFS-438) | Improve help message for quotas |  Minor | . | Raghu Angadi | Raghu Angadi |
-| [HDFS-167](https://issues.apache.org/jira/browse/HDFS-167) | DFSClient continues to retry indefinitely |  Minor | hdfs-client | Derek Wollenstein | Bill Zeller |
+| [HADOOP-5883](https://issues.apache.org/jira/browse/HADOOP-5883) | TaskMemoryMonitorThread might shoot down tasks even if their processes momentarily exceed the requested memory |  Major | . | Hemanth Yamijala | Hemanth Yamijala |
+| [HADOOP-5539](https://issues.apache.org/jira/browse/HADOOP-5539) | o.a.h.mapred.Merger not maintaining map out compression on intermediate files |  Blocker | . | Billy Pearson | Jothi Padmanabhan |
+| [HADOOP-5932](https://issues.apache.org/jira/browse/HADOOP-5932) | MemoryMatcher logs 0 as freeMemOnTT even though there are free slots available on TaskTraker |  Major | . | Karam Singh | Vinod Kumar Vavilapalli |
+| [HADOOP-5951](https://issues.apache.org/jira/browse/HADOOP-5951) | StorageInfo needs Apache license header. |  Major | . | Suresh Srinivas | Suresh Srinivas |
+| [HADOOP-5648](https://issues.apache.org/jira/browse/HADOOP-5648) | Not able to generate gridmix.jar on already compiled version of hadoop |  Major | benchmarks | Suman Sehgal | Giridharan Kesavan |
+| [HADOOP-5908](https://issues.apache.org/jira/browse/HADOOP-5908) | ArithmeticException in heartbeats with zero map jobs |  Major | . | Vinod Kumar Vavilapalli | Amar Kamat |
+| [HADOOP-5924](https://issues.apache.org/jira/browse/HADOOP-5924) | JT fails to recover the jobs after restart after HADOOP:4372 |  Major | . | Ramya Sunil | Amar Kamat |
+| [HADOOP-5882](https://issues.apache.org/jira/browse/HADOOP-5882) | Progress is not updated when the New Reducer is running reduce phase |  Blocker | . | Jothi Padmanabhan | Amareshwari Sriramadasu |
+| [HADOOP-5746](https://issues.apache.org/jira/browse/HADOOP-5746) | Errors encountered in MROutputThread after the last map/reduce call can go undetected |  Major | . | Devaraj Das | Amar Kamat |
+| [HADOOP-5884](https://issues.apache.org/jira/browse/HADOOP-5884) | Capacity scheduler should account high memory jobs as using more capacity of the queue |  Major | . | Hemanth Yamijala | Vinod Kumar Vavilapalli |
+| [HADOOP-5937](https://issues.apache.org/jira/browse/HADOOP-5937) | Correct  info message  "Use hadoop dfs -safemode option"  to  " Use hdfs dfsadmin -safemode option"  . |  Minor | . | Ravi Phulari | Ravi Phulari |
+| [HADOOP-5921](https://issues.apache.org/jira/browse/HADOOP-5921) | JobTracker does not come up because of NotReplicatedYetException |  Major | . | Amareshwari Sriramadasu | Amar Kamat |
+| [HADOOP-6017](https://issues.apache.org/jira/browse/HADOOP-6017) | NameNode and SecondaryNameNode fail to restart because of abnormal filenames. |  Blocker | . | Raghu Angadi | Tsz Wo Nicholas Sze |
+| [HADOOP-5920](https://issues.apache.org/jira/browse/HADOOP-5920) | TestJobHistory fails some times. |  Major | . | Amareshwari Sriramadasu | Amar Kamat |
 | [HDFS-26](https://issues.apache.org/jira/browse/HDFS-26) |  	 HADOOP-5862 for version .20  (Namespace quota exceeded message unclear) |  Major | . | Boris Shkolnik | Boris Shkolnik |
-| [MAPREDUCE-924](https://issues.apache.org/jira/browse/MAPREDUCE-924) | TestPipes must not directly invoke 'main' of pipes as an exit from main could cause the testcase to crash. |  Major | pipes | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
-| [MAPREDUCE-911](https://issues.apache.org/jira/browse/MAPREDUCE-911) | TestTaskFail fail sometimes |  Major | test | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HDFS-438](https://issues.apache.org/jira/browse/HDFS-438) | Improve help message for quotas |  Minor | . | Raghu Angadi | Raghu Angadi |
+| [MAPREDUCE-2](https://issues.apache.org/jira/browse/MAPREDUCE-2) | ArrayOutOfIndex error in KeyFieldBasedPartitioner on empty key |  Major | . | Amar Kamat | Amar Kamat |
+| [MAPREDUCE-130](https://issues.apache.org/jira/browse/MAPREDUCE-130) | Delete the jobconf copy from the log directory of the JobTracker when the job is retired |  Major | . | Devaraj Das | Amar Kamat |
+| [MAPREDUCE-657](https://issues.apache.org/jira/browse/MAPREDUCE-657) | CompletedJobStatusStore hardcodes filesystem to hdfs |  Major | jobtracker | Amar Kamat | Amar Kamat |
+| [MAPREDUCE-179](https://issues.apache.org/jira/browse/MAPREDUCE-179) | setProgress not called for new RecordReaders |  Blocker | . | Chris Douglas | Chris Douglas |
+| [MAPREDUCE-124](https://issues.apache.org/jira/browse/MAPREDUCE-124) | When abortTask of OutputCommitter fails with an Exception for a map-only job, the task is marked as success |  Major | . | Jothi Padmanabhan | Amareshwari Sriramadasu |
+| [HADOOP-6139](https://issues.apache.org/jira/browse/HADOOP-6139) | Incomplete help message is displayed for rm and rmr options. |  Minor | . | Suman Sehgal | Jakob Homan |
+| [HADOOP-6141](https://issues.apache.org/jira/browse/HADOOP-6141) | hadoop 0.20 branch "test-patch" is broken |  Major | build | Hong Tang | Hong Tang |
+| [HADOOP-6145](https://issues.apache.org/jira/browse/HADOOP-6145) | No error message for deleting non-existant file or directory. |  Major | fs | Suman Sehgal | Jakob Homan |
+| [MAPREDUCE-565](https://issues.apache.org/jira/browse/MAPREDUCE-565) | Partitioner does not work with new API |  Blocker | task | Jothi Padmanabhan | Owen O'Malley |
+| [MAPREDUCE-18](https://issues.apache.org/jira/browse/MAPREDUCE-18) | Under load the shuffle sometimes gets incorrect data |  Blocker | . | Owen O'Malley | Ravi Gummadi |
+| [MAPREDUCE-735](https://issues.apache.org/jira/browse/MAPREDUCE-735) | ArrayIndexOutOfBoundsException is thrown by KeyFieldBasedPartitioner |  Major | . | Suman Sehgal | Amar Kamat |
+| [MAPREDUCE-383](https://issues.apache.org/jira/browse/MAPREDUCE-383) | pipes combiner does not reset properly after a spill |  Major | . | Christian Kunz | Christian Kunz |
+| [MAPREDUCE-40](https://issues.apache.org/jira/browse/MAPREDUCE-40) | Memory management variables need a backwards compatibility option after HADOOP-5881 |  Blocker | . | Hemanth Yamijala | rahul k singh |
+| [MAPREDUCE-796](https://issues.apache.org/jira/browse/MAPREDUCE-796) | Encountered "ClassCastException" on tasktracker while running wordcount with MultithreadedMapRunner |  Major | examples | Suman Sehgal | Amar Kamat |
 | [MAPREDUCE-838](https://issues.apache.org/jira/browse/MAPREDUCE-838) | Task succeeds even when committer.commitTask fails with IOException |  Blocker | task | Koji Noguchi | Amareshwari Sriramadasu |
-| [MAPREDUCE-834](https://issues.apache.org/jira/browse/MAPREDUCE-834) | When TaskTracker config use old memory management values its memory monitoring is diabled. |  Major | . | Karam Singh | Sreekanth Ramakrishnan |
+| [MAPREDUCE-805](https://issues.apache.org/jira/browse/MAPREDUCE-805) | Deadlock in Jobtracker |  Major | . | Michael Tamm | Amar Kamat |
+| [HDFS-167](https://issues.apache.org/jira/browse/HDFS-167) | DFSClient continues to retry indefinitely |  Minor | hdfs-client | Derek Wollenstein | Bill Zeller |
 | [MAPREDUCE-832](https://issues.apache.org/jira/browse/MAPREDUCE-832) | Too many WARN messages about deprecated memorty config variables in JobTacker log |  Major | . | Karam Singh | rahul k singh |
+| [MAPREDUCE-745](https://issues.apache.org/jira/browse/MAPREDUCE-745) | TestRecoveryManager fails sometimes |  Major | jobtracker | Amareshwari Sriramadasu | Amar Kamat |
+| [MAPREDUCE-834](https://issues.apache.org/jira/browse/MAPREDUCE-834) | When TaskTracker config use old memory management values its memory monitoring is diabled. |  Major | . | Karam Singh | Sreekanth Ramakrishnan |
 | [MAPREDUCE-818](https://issues.apache.org/jira/browse/MAPREDUCE-818) | org.apache.hadoop.mapreduce.Counters.getGroup returns null if the group name doesnt exist. |  Minor | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
 | [MAPREDUCE-807](https://issues.apache.org/jira/browse/MAPREDUCE-807) | Stray user files in mapred.system.dir with permissions other than 777 can prevent the jobtracker from starting up. |  Blocker | jobtracker | Amar Kamat | Amar Kamat |
-| [MAPREDUCE-805](https://issues.apache.org/jira/browse/MAPREDUCE-805) | Deadlock in Jobtracker |  Major | . | Michael Tamm | Amar Kamat |
-| [MAPREDUCE-796](https://issues.apache.org/jira/browse/MAPREDUCE-796) | Encountered "ClassCastException" on tasktracker while running wordcount with MultithreadedMapRunner |  Major | examples | Suman Sehgal | Amar Kamat |
-| [MAPREDUCE-745](https://issues.apache.org/jira/browse/MAPREDUCE-745) | TestRecoveryManager fails sometimes |  Major | jobtracker | Amareshwari Sriramadasu | Amar Kamat |
-| [MAPREDUCE-735](https://issues.apache.org/jira/browse/MAPREDUCE-735) | ArrayIndexOutOfBoundsException is thrown by KeyFieldBasedPartitioner |  Major | . | Suman Sehgal | Amar Kamat |
-| [MAPREDUCE-687](https://issues.apache.org/jira/browse/MAPREDUCE-687) | TestMiniMRMapRedDebugScript fails sometimes |  Major | test | Amar Kamat | Amareshwari Sriramadasu |
-| [MAPREDUCE-657](https://issues.apache.org/jira/browse/MAPREDUCE-657) | CompletedJobStatusStore hardcodes filesystem to hdfs |  Major | jobtracker | Amar Kamat | Amar Kamat |
-| [MAPREDUCE-565](https://issues.apache.org/jira/browse/MAPREDUCE-565) | Partitioner does not work with new API |  Blocker | task | Jothi Padmanabhan | Owen O'Malley |
 | [MAPREDUCE-430](https://issues.apache.org/jira/browse/MAPREDUCE-430) | Task stuck in cleanup with OutOfMemoryErrors |  Major | . | Amareshwari Sriramadasu | Amar Kamat |
+| [HADOOP-6215](https://issues.apache.org/jira/browse/HADOOP-6215) | fix GenericOptionParser to deal with -D with '=' in the value |  Major | . | Owen O'Malley | Amar Kamat |
 | [MAPREDUCE-421](https://issues.apache.org/jira/browse/MAPREDUCE-421) | mapred pipes might return exit code 0 even when failing |  Major | pipes | Christian Kunz | Christian Kunz |
-| [MAPREDUCE-383](https://issues.apache.org/jira/browse/MAPREDUCE-383) | pipes combiner does not reset properly after a spill |  Major | . | Christian Kunz | Christian Kunz |
-| [MAPREDUCE-179](https://issues.apache.org/jira/browse/MAPREDUCE-179) | setProgress not called for new RecordReaders |  Blocker | . | Chris Douglas | Chris Douglas |
-| [MAPREDUCE-130](https://issues.apache.org/jira/browse/MAPREDUCE-130) | Delete the jobconf copy from the log directory of the JobTracker when the job is retired |  Major | . | Devaraj Das | Amar Kamat |
-| [MAPREDUCE-124](https://issues.apache.org/jira/browse/MAPREDUCE-124) | When abortTask of OutputCommitter fails with an Exception for a map-only job, the task is marked as success |  Major | . | Jothi Padmanabhan | Amareshwari Sriramadasu |
-| [MAPREDUCE-40](https://issues.apache.org/jira/browse/MAPREDUCE-40) | Memory management variables need a backwards compatibility option after HADOOP-5881 |  Blocker | . | Hemanth Yamijala | rahul k singh |
-| [MAPREDUCE-18](https://issues.apache.org/jira/browse/MAPREDUCE-18) | Under load the shuffle sometimes gets incorrect data |  Blocker | . | Owen O'Malley | Ravi Gummadi |
-| [MAPREDUCE-2](https://issues.apache.org/jira/browse/MAPREDUCE-2) | ArrayOutOfIndex error in KeyFieldBasedPartitioner on empty key |  Major | . | Amar Kamat | Amar Kamat |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HDFS-525](https://issues.apache.org/jira/browse/HDFS-525) | ListPathsServlet.java uses static SimpleDateFormat that has threading issues |  Major | namenode | Suresh Srinivas | Suresh Srinivas |
+| [MAPREDUCE-911](https://issues.apache.org/jira/browse/MAPREDUCE-911) | TestTaskFail fail sometimes |  Major | test | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [MAPREDUCE-687](https://issues.apache.org/jira/browse/MAPREDUCE-687) | TestMiniMRMapRedDebugScript fails sometimes |  Major | test | Amar Kamat | Amareshwari Sriramadasu |
+| [MAPREDUCE-924](https://issues.apache.org/jira/browse/MAPREDUCE-924) | TestPipes must not directly invoke 'main' of pipes as an exit from main could cause the testcase to crash. |  Major | pipes | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HDFS-1022](https://issues.apache.org/jira/browse/HDFS-1022) | Merge under-10-min tests specs into one file |  Major | test | Erik Steffl | Erik Steffl |
 
 
 ### SUB-TASKS:
@@ -137,9 +125,3 @@
 | [HADOOP-6213](https://issues.apache.org/jira/browse/HADOOP-6213) | Remove commons dependency on commons-cli2 |  Blocker | util | Amar Kamat | Amar Kamat |
 
 
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/RELEASENOTES.0.20.1.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/RELEASENOTES.0.20.1.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/RELEASENOTES.0.20.1.md
index 953c100..cbc9762 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/RELEASENOTES.0.20.1.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.1/RELEASENOTES.0.20.1.md
@@ -23,37 +23,37 @@ These release notes cover new developer and user-facing incompatibilities, impor
 
 ---
 
-* [HADOOP-6213](https://issues.apache.org/jira/browse/HADOOP-6213) | *Blocker* | **Remove commons dependency on commons-cli2**
+* [HADOOP-5210](https://issues.apache.org/jira/browse/HADOOP-5210) | *Minor* | **Reduce Task Progress shows \> 100% when the total size of map outputs (for a single reducer) is high**
 
-GenericOptionsParser in branch 0.20 depends on commons-cli2. This jira removes the dependency of branch 0.20 on commons-cli2 completely. The problem is seen after 'ant binary' where all the library files are copied to '$hadoop-home/lib' which already has commons-cli2.
+This patch resets the variable totalBytesProcessed before the final merge so that it will be used for calculating the progress of reducePhase (the 3rd phase of the reduce task) correctly.
 
 
 ---
 
-* [HADOOP-6080](https://issues.apache.org/jira/browse/HADOOP-6080) | *Major* | **Handling of  Trash with quota**
+* [HADOOP-5726](https://issues.apache.org/jira/browse/HADOOP-5726) | *Major* | **Remove pre-emption from the capacity scheduler code base**
 
-Provide a new option to rm and rmr, -skipTrash, which will immediately delete the files specified, rather than moving them to the trash.
+Removed pre-emption from capacity scheduler. The impact of this change is that capacities for queues can no longer be guaranteed within a given span of time. Also changed configuration variables to remove pre-emption related variables and better reflect the absence of guarantees.
 
 
 ---
 
-* [HADOOP-5924](https://issues.apache.org/jira/browse/HADOOP-5924) | *Major* | **JT fails to recover the jobs after restart after HADOOP:4372**
+* [HADOOP-5881](https://issues.apache.org/jira/browse/HADOOP-5881) | *Major* | **Simplify configuration related to task-memory-monitoring and memory-based scheduling**
 
-Post HADOOP-4372, empty job history files caused NPE. This issue fixes that by creating new files if no old file is found.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-5921](https://issues.apache.org/jira/browse/HADOOP-5921) | *Major* | **JobTracker does not come up because of NotReplicatedYetException**
+* [HADOOP-5924](https://issues.apache.org/jira/browse/HADOOP-5924) | *Major* | **JT fails to recover the jobs after restart after HADOOP:4372**
 
-Jobtracker crashes if it fails to create the jobtracker.info file (i.e. if sufficient datanodes are not up). With this patch it keeps retrying on IOExceptions, assuming that IOExceptions during jobtracker.info creation imply that HDFS is not in a \*ready\* state.
+Post HADOOP-4372, empty job history files caused NPE. This issue fixes that by creating new files if no old file is found.
 
 
 ---
 
-* [HADOOP-5920](https://issues.apache.org/jira/browse/HADOOP-5920) | *Major* | **TestJobHistory fails some times.**
+* [HADOOP-5746](https://issues.apache.org/jira/browse/HADOOP-5746) | *Major* | **Errors encountered in MROutputThread after the last map/reduce call can go undetected**
 
-TestJobHistory fails as jobtracker is restarted very fast (within a minute) and history files from earlier testcases were not cleaned up. This patch cleans up the history-dir and mapred-system-dir after every test.
+If the child (streaming) process returns successfully and the MROutputThread throws an error, there was no way to detect that, as all the IOExceptions were ignored. Such issues can occur when DFS clients were closed, etc. Now a check for errors (in threads) is made before finishing off the task, and an exception is thrown that fails the task.
 
 
 ---
@@ -66,153 +66,153 @@ take more slots proportionally with respect to a slot's default memory size.
 
 ---
 
-* [HADOOP-5881](https://issues.apache.org/jira/browse/HADOOP-5881) | *Major* | **Simplify configuration related to task-memory-monitoring and memory-based scheduling**
+* [HADOOP-5921](https://issues.apache.org/jira/browse/HADOOP-5921) | *Major* | **JobTracker does not come up because of NotReplicatedYetException**
 
-**WARNING: No release note provided for this incompatible change.**
+Jobtracker crashes if it fails to create the jobtracker.info file (i.e. if sufficient datanodes are not up). With this patch it keeps retrying on IOExceptions, assuming that IOExceptions during jobtracker.info creation imply that HDFS is not in a \*ready\* state.
 
 
 ---
 
-* [HADOOP-5746](https://issues.apache.org/jira/browse/HADOOP-5746) | *Major* | **Errors encountered in MROutputThread after the last map/reduce call can go undetected**
+* [HADOOP-5920](https://issues.apache.org/jira/browse/HADOOP-5920) | *Major* | **TestJobHistory fails some times.**
 
-If the child (streaming) process returns successfully and the MROutputThread throws an error, there was no way to detect that as all the IOExceptions was ignored. Such issues can occur when DFS clients were closed etc. Now a check for errors (in threads) is made before finishing off the task and an exception is thrown that fails he task.
+TestJobHistory fails because the JobTracker is restarted very quickly (within a minute) and history files from earlier test cases were not cleaned up. This patch cleans up the history-dir and mapred-system-dir after every test.
 
 
 ---
 
-* [HADOOP-5726](https://issues.apache.org/jira/browse/HADOOP-5726) | *Major* | **Remove pre-emption from the capacity scheduler code base**
+* [HADOOP-3315](https://issues.apache.org/jira/browse/HADOOP-3315) | *Major* | **New binary file format**
 
-Removed pre-emption from capacity scheduler. The impact of this change is that capacities for queues can no longer be guaranteed within a given span of time. Also changed configuration variables to remove pre-emption related variables and better reflect the absence of guarantees.
+Added a new binary file format, TFile.
 
 
 ---
 
-* [HADOOP-5210](https://issues.apache.org/jira/browse/HADOOP-5210) | *Minor* | **Reduce Task Progress shows \> 100% when the total size of map outputs (for a single reducer) is high**
+* [MAPREDUCE-2](https://issues.apache.org/jira/browse/MAPREDUCE-2) | *Major* | **ArrayOutOfIndex error in KeyFieldBasedPartitioner on empty key**
 
-This patch resets the variable totalBytesProcessed before the final merge sothat it will be used for calculating the progress of reducePhase(the 3rd phase of reduce task) correctly.
+KeyFieldBasedPartitioner threw an array-index-out-of-bounds error when passed an empty key. This patch hashes an empty key to a hashcode of 0.
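+
+A small, self-contained sketch of what hashing an empty key to 0 means for partition assignment (illustrative only; the byte hash below is an assumption, not the partitioner's exact hash function):
+
+```java
+public class EmptyKeyPartitionSketch {
+  static int partitionFor(byte[] keyBytes, int numReduceTasks) {
+    int hash = 0;                                   // an empty key contributes nothing: hashcode 0
+    for (byte b : keyBytes) {
+      hash = 31 * hash + b;
+    }
+    return (hash & Integer.MAX_VALUE) % numReduceTasks;
+  }
+
+  public static void main(String[] args) {
+    System.out.println(partitionFor(new byte[0], 4));        // 0: an empty key maps cleanly to partition 0
+    System.out.println(partitionFor("abc".getBytes(), 4));   // a non-empty key partitions as before
+  }
+}
+```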
 
 
 ---
 
-* [HADOOP-3315](https://issues.apache.org/jira/browse/HADOOP-3315) | *Major* | **New binary file format**
+* [MAPREDUCE-130](https://issues.apache.org/jira/browse/MAPREDUCE-130) | *Major* | **Delete the jobconf copy from the log directory of the JobTracker when the job is retired**
 
-Add a new, binary file format TFile.
+When a job is initialized, it localizes the job conf to the logs dir. Without this patch, that copy never gets deleted. Now, when the job retires, the conf is deleted. This local copy is required for display on the web UI.
 
 
 ---
 
-* [MAPREDUCE-838](https://issues.apache.org/jira/browse/MAPREDUCE-838) | *Blocker* | **Task succeeds even when committer.commitTask fails with IOException**
+* [MAPREDUCE-657](https://issues.apache.org/jira/browse/MAPREDUCE-657) | *Major* | **CompletedJobStatusStore hardcodes filesystem to hdfs**
 
-Fixed a bug in the way commit of task outputs happens. The bug was that if commit fails with IOException, the task would be declared as successful.
+CompletedJobStatusStore was hardcoded to persist to HDFS. This patch allows it to persist to the local filesystem as well: just qualify mapred.job.tracker.persist.jobstatus.dir with file://
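+
+A hedged configuration sketch of the file:// qualification described above (the directory path is a hypothetical example, not a value taken from the patch):
+
+```java
+import org.apache.hadoop.mapred.JobConf;
+
+public class LocalJobStatusStore {
+  public static void main(String[] args) {
+    JobConf conf = new JobConf();
+    // Qualify the directory with file:// so completed-job status is persisted locally
+    // instead of on HDFS; the path itself is only illustrative.
+    conf.set("mapred.job.tracker.persist.jobstatus.dir", "file:///var/hadoop/jobstatus");
+  }
+}
+```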
 
 
 ---
 
-* [MAPREDUCE-834](https://issues.apache.org/jira/browse/MAPREDUCE-834) | *Major* | **When TaskTracker config use old memory management values its memory monitoring is diabled.**
+* [HADOOP-6080](https://issues.apache.org/jira/browse/HADOOP-6080) | *Major* | **Handling of  Trash with quota**
 
-The tasktracker's startup code was modified to use deprecated memory management configuration variables, when specified, and enable memory monitoring of tasks.
+Provide a new option to rm and rmr, -skipTrash, which will immediately delete the files specified, rather than moving them to the trash.
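+
+A hedged usage sketch of the new flag via the Java FsShell, roughly equivalent to running `hadoop fs -rmr -skipTrash /tmp/scratch` on the command line; the target path is hypothetical:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FsShell;
+import org.apache.hadoop.util.ToolRunner;
+
+public class SkipTrashExample {
+  public static void main(String[] args) throws Exception {
+    // Delete immediately, bypassing the trash, using the new -skipTrash option.
+    int rc = ToolRunner.run(new FsShell(new Configuration()),
+        new String[] {"-rmr", "-skipTrash", "/tmp/scratch"});
+    System.exit(rc);
+  }
+}
+```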
 
 
 ---
 
-* [MAPREDUCE-832](https://issues.apache.org/jira/browse/MAPREDUCE-832) | *Major* | **Too many WARN messages about deprecated memorty config variables in JobTacker log**
+* [MAPREDUCE-18](https://issues.apache.org/jira/browse/MAPREDUCE-18) | *Blocker* | **Under load the shuffle sometimes gets incorrect data**
 
-Reduced the frequency of log messages printed when a deprecated memory management variable is found in configuration of a job.
+This patch adds the map id and reduce id to the HTTP header of the map output when it is sent to the reduce node. It also validates the compressed length, decompressed length, map id and reduce id from the HTTP header at the reduce node.
 
 
 ---
 
-* [MAPREDUCE-818](https://issues.apache.org/jira/browse/MAPREDUCE-818) | *Minor* | **org.apache.hadoop.mapreduce.Counters.getGroup returns null if the group name doesnt exist.**
+* [MAPREDUCE-383](https://issues.apache.org/jira/browse/MAPREDUCE-383) | *Major* | **pipes combiner does not reset properly after a spill**
 
-Fixed a bug in the new org.apache.hadoop.mapreduce.Counters.getGroup() method to return an empty group if group name doesn't exist, instead of null, thus making sure that it is in sync with the Javadoc.
+Fixed a bug in the Pipes combiner so that it resets the spilled-bytes count after a spill.
 
 
 ---
 
-* [MAPREDUCE-807](https://issues.apache.org/jira/browse/MAPREDUCE-807) | *Blocker* | **Stray user files in mapred.system.dir with permissions other than 777 can prevent the jobtracker from starting up.**
+* [MAPREDUCE-40](https://issues.apache.org/jira/browse/MAPREDUCE-40) | *Blocker* | **Memory management variables need a backwards compatibility option after HADOOP-5881**
 
-The JobTracker tries to delete the mapred.system.dir when it is starting up (with the job recovery disabled). The fix provided by this jira is that JobTracker will fail (bail out) with AccessControlException if it fails to delete files/directories in mapred.system.dir due to access control issues.
+Fixed backwards compatibility by re-introducing (and deprecating) the removed memory-monitoring configuration options.
 
 
 ---
 
-* [MAPREDUCE-805](https://issues.apache.org/jira/browse/MAPREDUCE-805) | *Major* | **Deadlock in Jobtracker**
-
-Job initialization process was changed to not change (run) states during initialization. The reason is two fold
-- this can lead to deadlock as state changes require circular locking (i.e JobInProgress requires JobTracker lock)
-- events were not raised as these state changes were not informed/propogated back to the JobTracker
+* [MAPREDUCE-796](https://issues.apache.org/jira/browse/MAPREDUCE-796) | *Major* | **Encountered "ClassCastException" on tasktracker while running wordcount with MultithreadedMapRunner**
 
-Now the JobTracker takes care of initializing/failing/killing the job and raising appropriate events. The simple rule that was enforced was that "The JobTracker lock is \*must\* before changing the run-state of a job".
+The multithreaded mapper was modified to create a new RuntimeException (object) from a throwable, instead of casting the throwable to RuntimeException, when the multithreaded map encounters a fault.
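+
+An illustrative sketch of the wrap-instead-of-cast pattern (plain Java, not the MultithreadedMapRunner source; the class and method names are hypothetical):
+
+```java
+public class WrapThrowableSketch {
+  static RuntimeException asRuntime(Throwable t) {
+    return (t instanceof RuntimeException)
+        ? (RuntimeException) t        // already unchecked: safe to cast
+        : new RuntimeException(t);    // wrap anything else instead of casting, avoiding ClassCastException
+  }
+
+  public static void main(String[] args) {
+    try {
+      throw asRuntime(new java.io.IOException("simulated fault"));
+    } catch (RuntimeException e) {
+      System.out.println(e.getCause());  // the original throwable is preserved as the cause
+    }
+  }
+}
+```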
 
 
 ---
 
-* [MAPREDUCE-796](https://issues.apache.org/jira/browse/MAPREDUCE-796) | *Major* | **Encountered "ClassCastException" on tasktracker while running wordcount with MultithreadedMapRunner**
+* [MAPREDUCE-838](https://issues.apache.org/jira/browse/MAPREDUCE-838) | *Blocker* | **Task succeeds even when committer.commitTask fails with IOException**
 
-Multithreaded mapper was modified to create a new Runtime exception (object) from a throwable instead of casting a throwable into a RuntimeException, once the Multithreaded map encounters a fault.
+Fixed a bug in the way the commit of task outputs happens. The bug was that if the commit failed with an IOException, the task would still be declared successful.
 
 
 ---
 
-* [MAPREDUCE-767](https://issues.apache.org/jira/browse/MAPREDUCE-767) | *Major* | **to remove mapreduce dependency on commons-cli2**
+* [MAPREDUCE-805](https://issues.apache.org/jira/browse/MAPREDUCE-805) | *Major* | **Deadlock in Jobtracker**
 
-Removes the dependency of hadoop-mapred from commons-cli2 and uses commons-cli1.2 for command-line parsing.
+The job initialization process was changed so that it does not change (run) states during initialization. The reason is twofold:
+- it can lead to deadlock, as state changes require circular locking (i.e., JobInProgress requires the JobTracker lock)
+- events were not raised, as these state changes were not propagated back to the JobTracker
+
+Now the JobTracker takes care of initializing/failing/killing the job and raising the appropriate events. The simple rule that was enforced is that the JobTracker lock \*must\* be held before changing the run-state of a job (see the illustrative sketch below).
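+
+A minimal lock-ordering sketch of that rule (illustrative only; the fields stand in for the JobTracker and JobInProgress monitors and are not Hadoop code):
+
+```java
+public class LockOrderSketch {
+  private final Object jobTrackerLock = new Object();  // stands in for the JobTracker monitor
+  private final Object jobLock = new Object();         // stands in for a JobInProgress monitor
+  private String runState = "PREP";
+
+  void failJob() {
+    synchronized (jobTrackerLock) {   // always take the JobTracker lock first...
+      synchronized (jobLock) {        // ...then the per-job lock, so the lock order is never reversed
+        runState = "FAILED";
+        // raise the corresponding events while the JobTracker lock is still held
+      }
+    }
+  }
+}
+```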
 
 
 ---
 
-* [MAPREDUCE-745](https://issues.apache.org/jira/browse/MAPREDUCE-745) | *Major* | **TestRecoveryManager fails sometimes**
+* [MAPREDUCE-832](https://issues.apache.org/jira/browse/MAPREDUCE-832) | *Major* | **Too many WARN messages about deprecated memorty config variables in JobTacker log**
 
-JobTracker was changed to take an identifier as an argument. This helps in testcases where the jobtracker/mapred-cluster is (re)started in a short span of time and the chances of jobtracker identifier clashing are high. Also the RecoveryManager was modified to throw an exception if a job fails in init during the recovery process. The reason being that this event will trigger a job failure in the recovery process and will remove the failed job from further initialization and processing.
+Reduced the frequency of log messages printed when a deprecated memory-management variable is found in the configuration of a job.
 
 
 ---
 
-* [MAPREDUCE-657](https://issues.apache.org/jira/browse/MAPREDUCE-657) | *Major* | **CompletedJobStatusStore hardcodes filesystem to hdfs**
+* [MAPREDUCE-745](https://issues.apache.org/jira/browse/MAPREDUCE-745) | *Major* | **TestRecoveryManager fails sometimes**
 
-CompletedJobStatusStore was hardcored to persist to hdfs. This patch allows to persist to local fs. Just qualify mapred.job.tracker.persist.jobstatus.dir with file://
+The JobTracker was changed to take an identifier as an argument. This helps in test cases where the JobTracker/mapred cluster is (re)started in a short span of time and the chances of JobTracker identifiers clashing are high. The RecoveryManager was also modified to throw an exception if a job fails during init in the recovery process, because this event triggers a job failure during recovery and removes the failed job from further initialization and processing.
 
 
 ---
 
-* [MAPREDUCE-430](https://issues.apache.org/jira/browse/MAPREDUCE-430) | *Major* | **Task stuck in cleanup with OutOfMemoryErrors**
+* [MAPREDUCE-834](https://issues.apache.org/jira/browse/MAPREDUCE-834) | *Major* | **When TaskTracker config use old memory management values its memory monitoring is diabled.**
 
-Various code paths in the framework caught Throwable and tried to do inline cleanup. In case of OOM errors, such inline-cleanups can result into hung jvms. With this fix, the TaskTracker provides a api to report fatal errors (any throwable other than FSErrror and Exceptions). On catching a Throwable, Mapper/Reducer tries to inform the TT.
+The TaskTracker's startup code was modified to use the deprecated memory-management configuration variables, when specified, and to enable memory monitoring of tasks.
 
 
 ---
 
-* [MAPREDUCE-383](https://issues.apache.org/jira/browse/MAPREDUCE-383) | *Major* | **pipes combiner does not reset properly after a spill**
+* [MAPREDUCE-818](https://issues.apache.org/jira/browse/MAPREDUCE-818) | *Minor* | **org.apache.hadoop.mapreduce.Counters.getGroup returns null if the group name doesnt exist.**
 
-Fixed a bug in Pipes combiner to reset the spilled bytes count after the spill.
+Fixed a bug in the new org.apache.hadoop.mapreduce.Counters.getGroup() method so that it returns an empty group, instead of null, if the group name doesn't exist, keeping it in sync with the Javadoc.
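+
+A hedged usage sketch of the fixed behavior (the group name below is hypothetical):
+
+```java
+import org.apache.hadoop.mapreduce.Counter;
+import org.apache.hadoop.mapreduce.CounterGroup;
+import org.apache.hadoop.mapreduce.Counters;
+
+public class EmptyGroupExample {
+  public static void main(String[] args) {
+    Counters counters = new Counters();
+    CounterGroup group = counters.getGroup("no-such-group");  // empty group, not null, after this fix
+    int n = 0;
+    for (Counter c : group) {          // safe to iterate without a null check
+      n++;
+    }
+    System.out.println(n);             // 0: the group exists but holds no counters
+  }
+}
+```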
 
 
 ---
 
-* [MAPREDUCE-130](https://issues.apache.org/jira/browse/MAPREDUCE-130) | *Major* | **Delete the jobconf copy from the log directory of the JobTracker when the job is retired**
+* [MAPREDUCE-807](https://issues.apache.org/jira/browse/MAPREDUCE-807) | *Blocker* | **Stray user files in mapred.system.dir with permissions other than 777 can prevent the jobtracker from starting up.**
 
-When a job is initialized, it localizes the job conf to the logs dir. Without this patch I never gets deleted. Now when the job retires, the conf is deleted. This local copy is required to display on the webui.
+The JobTracker tries to delete mapred.system.dir when it is starting up (with job recovery disabled). The fix provided by this jira is that the JobTracker will fail (bail out) with an AccessControlException if it fails to delete files/directories in mapred.system.dir due to access control issues.
 
 
 ---
 
-* [MAPREDUCE-40](https://issues.apache.org/jira/browse/MAPREDUCE-40) | *Blocker* | **Memory management variables need a backwards compatibility option after HADOOP-5881**
+* [MAPREDUCE-767](https://issues.apache.org/jira/browse/MAPREDUCE-767) | *Major* | **to remove mapreduce dependency on commons-cli2**
 
-Fixed backwards compatibility by re-introducing and deprecating removed memory monitoring related configuration options.
+Removes the dependency of hadoop-mapred on commons-cli2 and uses commons-cli 1.2 for command-line parsing.
 
 
 ---
 
-* [MAPREDUCE-18](https://issues.apache.org/jira/browse/MAPREDUCE-18) | *Blocker* | **Under load the shuffle sometimes gets incorrect data**
+* [HADOOP-6213](https://issues.apache.org/jira/browse/HADOOP-6213) | *Blocker* | **Remove commons dependency on commons-cli2**
 
-This patch adds the mapid and reduceid in the http header of mapoutput when being sent to reduce node. Also validates compressed length, decompressed length, mapid and reduceid from http header at reduce node.
+GenericOptionsParser in branch 0.20 depends on commons-cli2. This jira removes the dependency of branch 0.20 on commons-cli2 completely. The problem was seen after 'ant binary', where all the library files are copied to '$hadoop-home/lib', which already has commons-cli2.
 
 
 ---
 
-* [MAPREDUCE-2](https://issues.apache.org/jira/browse/MAPREDUCE-2) | *Major* | **ArrayOutOfIndex error in KeyFieldBasedPartitioner on empty key**
+* [MAPREDUCE-430](https://issues.apache.org/jira/browse/MAPREDUCE-430) | *Major* | **Task stuck in cleanup with OutOfMemoryErrors**
 
-KeyFieldBasedPartitioner throws ArrayOutOfIndex when passed an empty key. This patch hashes empty key to 0 hashcode.
+Various code paths in the framework caught Throwable and tried to do inline cleanup. In the case of OOM errors, such inline cleanups can result in hung JVMs. With this fix, the TaskTracker provides an API to report fatal errors (any throwable other than FSError and Exceptions). On catching a Throwable, the Mapper/Reducer tries to inform the TT.
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/CHANGES.0.20.2.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/CHANGES.0.20.2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/CHANGES.0.20.2.md
index 3ca5bdb..6a5151e 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/CHANGES.0.20.2.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.20.2/CHANGES.0.20.2.md
@@ -27,70 +27,58 @@
 | [HDFS-793](https://issues.apache.org/jira/browse/HDFS-793) | DataNode should first receive the whole packet ack message before it constructs and sends its own ack message for the packet |  Blocker | datanode | Hairong Kuang | Hairong Kuang |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-1849](https://issues.apache.org/jira/browse/HADOOP-1849) | IPC server max queue size should be configurable |  Major | ipc | Raghu Angadi | Konstantin Shvachko |
 | [MAPREDUCE-623](https://issues.apache.org/jira/browse/MAPREDUCE-623) | Resolve javac warnings in mapred |  Major | build | Jothi Padmanabhan | Jothi Padmanabhan |
+| [HADOOP-1849](https://issues.apache.org/jira/browse/HADOOP-1849) | IPC server max queue size should be configurable |  Major | ipc | Raghu Angadi | Konstantin Shvachko |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-6576](https://issues.apache.org/jira/browse/HADOOP-6576) | TestStreamingStatus is failing on 0.20 branch |  Major | . | Chris Douglas | Todd Lipcon |
-| [HADOOP-6575](https://issues.apache.org/jira/browse/HADOOP-6575) | Tests do not run on 0.20 branch |  Major | . | Chris Douglas | Chris Douglas |
-| [HADOOP-6524](https://issues.apache.org/jira/browse/HADOOP-6524) | Contrib tests are failing Clover'ed build |  Major | build | Konstantin Boudnik | Konstantin Boudnik |
-| [HADOOP-6506](https://issues.apache.org/jira/browse/HADOOP-6506) | Failing tests prevent the rest of test targets from execution. |  Major | build | Konstantin Boudnik | Konstantin Boudnik |
-| [HADOOP-6498](https://issues.apache.org/jira/browse/HADOOP-6498) | IPC client  bug may cause rpc call hang |  Blocker | ipc | Ruyue Ma | Ruyue Ma |
-| [HADOOP-6460](https://issues.apache.org/jira/browse/HADOOP-6460) | Namenode runs of out of memory due to memory leak in ipc Server |  Blocker | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-6428](https://issues.apache.org/jira/browse/HADOOP-6428) | HttpServer sleeps with negative values |  Major | . | Tsz Wo Nicholas Sze | Konstantin Boudnik |
-| [HADOOP-6315](https://issues.apache.org/jira/browse/HADOOP-6315) | GzipCodec should not represent BuiltInZlibInflater as decompressorType |  Major | io | Aaron Kimball | Aaron Kimball |
-| [HADOOP-6269](https://issues.apache.org/jira/browse/HADOOP-6269) | Missing synchronization for defaultResources in Configuration.addResource |  Major | conf | Todd Lipcon | Sreekanth Ramakrishnan |
+| [HADOOP-5759](https://issues.apache.org/jira/browse/HADOOP-5759) | IllegalArgumentException when CombineFileInputFormat is used as job InputFormat |  Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [MAPREDUCE-826](https://issues.apache.org/jira/browse/MAPREDUCE-826) | harchive doesn't use ToolRunner / harchive returns 0 even if the job fails with exception |  Trivial | harchive | Koji Noguchi | Koji Noguchi |
+| [MAPREDUCE-112](https://issues.apache.org/jira/browse/MAPREDUCE-112) | Reduce Input Records and Reduce Output Records counters are not being set when using the new Mapreduce reducer API |  Blocker | . | Jothi Padmanabhan | Jothi Padmanabhan |
 | [HADOOP-6231](https://issues.apache.org/jira/browse/HADOOP-6231) | Allow caching of filesystem instances to be disabled on a per-instance basis |  Major | fs | Tom White | Ben Slusky |
+| [MAPREDUCE-979](https://issues.apache.org/jira/browse/MAPREDUCE-979) | JobConf.getMemoryFor{Map\|Reduce}Task doesn't fallback to newer config knobs when mapred.taskmaxvmem is set to DISABLED\_MEMORY\_LIMIT of -1 |  Blocker | jobtracker, tasktracker | Arun C Murthy | Sreekanth Ramakrishnan |
+| [HDFS-677](https://issues.apache.org/jira/browse/HDFS-677) | Rename failure due to quota results in deletion of src directory |  Blocker | namenode | Suresh Srinivas | Suresh Srinivas |
+| [HDFS-579](https://issues.apache.org/jira/browse/HDFS-579) | HADOOP-3792 update of DfsTask incomplete |  Major | hdfs-client | Christian Kunz | Christian Kunz |
+| [MAPREDUCE-1070](https://issues.apache.org/jira/browse/MAPREDUCE-1070) | Deadlock in FairSchedulerServlet |  Major | . | Todd Lipcon | Todd Lipcon |
 | [HADOOP-6097](https://issues.apache.org/jira/browse/HADOOP-6097) | Multiple bugs w/ Hadoop archives |  Major | fs | Ben Slusky | Ben Slusky |
-| [HADOOP-5759](https://issues.apache.org/jira/browse/HADOOP-5759) | IllegalArgumentException when CombineFileInputFormat is used as job InputFormat |  Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
-| [HADOOP-5623](https://issues.apache.org/jira/browse/HADOOP-5623) | Streaming: process provided status messages are overwritten every 10 seoncds |  Major | . | Rick Cox | Rick Cox |
-| [HADOOP-5612](https://issues.apache.org/jira/browse/HADOOP-5612) | Some c++ scripts are not chmodded before ant execution |  Major | build | Todd Lipcon | Todd Lipcon |
-| [HADOOP-5611](https://issues.apache.org/jira/browse/HADOOP-5611) | C++ libraries do not build on Debian Lenny |  Critical | . | Todd Lipcon | Todd Lipcon |
-| [HDFS-927](https://issues.apache.org/jira/browse/HDFS-927) | DFSInputStream retries too many times for new block locations |  Critical | hdfs-client | Todd Lipcon | Todd Lipcon |
-| [HDFS-872](https://issues.apache.org/jira/browse/HDFS-872) | DFSClient 0.20.1 is incompatible with HDFS 0.20.2 |  Major | datanode, hdfs-client | Bassam Tabbara | Todd Lipcon |
-| [HDFS-781](https://issues.apache.org/jira/browse/HDFS-781) | Metrics PendingDeletionBlocks is not decremented |  Blocker | namenode | Suresh Srinivas | Suresh Srinivas |
-| [HDFS-761](https://issues.apache.org/jira/browse/HDFS-761) | Failure to process rename operation from edits log due to quota verification |  Major | namenode | Suresh Srinivas | Suresh Srinivas |
-| [HDFS-745](https://issues.apache.org/jira/browse/HDFS-745) | TestFsck timeout on 0.20. |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HDFS-732](https://issues.apache.org/jira/browse/HDFS-732) | HDFS files are ending up truncated |  Blocker | hdfs-client | Christian Kunz | Tsz Wo Nicholas Sze |
 | [HDFS-723](https://issues.apache.org/jira/browse/HDFS-723) | Deadlock in DFSClient#DFSOutputStream |  Blocker | . | Hairong Kuang | Hairong Kuang |
-| [HDFS-677](https://issues.apache.org/jira/browse/HDFS-677) | Rename failure due to quota results in deletion of src directory |  Blocker | namenode | Suresh Srinivas | Suresh Srinivas |
+| [HDFS-732](https://issues.apache.org/jira/browse/HDFS-732) | HDFS files are ending up truncated |  Blocker | hdfs-client | Christian Kunz | Tsz Wo Nicholas Sze |
+| [MAPREDUCE-1163](https://issues.apache.org/jira/browse/MAPREDUCE-1163) | hdfsJniHelper.h: Yahoo! specific paths are encoded |  Trivial | . | Allen Wittenauer | Allen Wittenauer |
+| [HDFS-761](https://issues.apache.org/jira/browse/HDFS-761) | Failure to process rename operation from edits log due to quota verification |  Major | namenode | Suresh Srinivas | Suresh Srinivas |
+| [MAPREDUCE-1068](https://issues.apache.org/jira/browse/MAPREDUCE-1068) | In hadoop-0.20.0 streaming job do not throw proper verbose error message if file is not present |  Major | contrib/streaming | Peeyush Bishnoi | Amareshwari Sriramadasu |
 | [HDFS-596](https://issues.apache.org/jira/browse/HDFS-596) | Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory for mOwner and mGroup |  Blocker | fuse-dfs | Zhang Bingjun | Zhang Bingjun |
-| [HDFS-579](https://issues.apache.org/jira/browse/HDFS-579) | HADOOP-3792 update of DfsTask incomplete |  Major | hdfs-client | Christian Kunz | Christian Kunz |
-| [HDFS-187](https://issues.apache.org/jira/browse/HDFS-187) | TestStartup fails if hdfs is running in the same machine |  Major | test | Tsz Wo Nicholas Sze | Todd Lipcon |
+| [MAPREDUCE-1147](https://issues.apache.org/jira/browse/MAPREDUCE-1147) | Map output records counter missing for map-only jobs in new API |  Blocker | . | Chris Douglas | Amar Kamat |
+| [HADOOP-6269](https://issues.apache.org/jira/browse/HADOOP-6269) | Missing synchronization for defaultResources in Configuration.addResource |  Major | conf | Todd Lipcon | Sreekanth Ramakrishnan |
+| [MAPREDUCE-1182](https://issues.apache.org/jira/browse/MAPREDUCE-1182) | Reducers fail with OutOfMemoryError while copying Map outputs |  Blocker | . | Chandra Prakash Bhagtani | Chandra Prakash Bhagtani |
+| [HDFS-781](https://issues.apache.org/jira/browse/HDFS-781) | Metrics PendingDeletionBlocks is not decremented |  Blocker | namenode | Suresh Srinivas | Suresh Srinivas |
 | [HDFS-185](https://issues.apache.org/jira/browse/HDFS-185) | Chown , chgrp , chmod operations allowed when namenode is in safemode . |  Major | . | Ravi Phulari | Ravi Phulari |
+| [HADOOP-6428](https://issues.apache.org/jira/browse/HADOOP-6428) | HttpServer sleeps with negative values |  Major | . | Tsz Wo Nicholas Sze | Konstantin Boudnik |
 | [HDFS-101](https://issues.apache.org/jira/browse/HDFS-101) | DFS write pipeline : DFSClient sometimes does not detect second datanode failure |  Blocker | datanode | Raghu Angadi | Hairong Kuang |
-| [MAPREDUCE-1251](https://issues.apache.org/jira/browse/MAPREDUCE-1251) | c++ utils doesn't compile |  Major | . | Eli Collins | Eli Collins |
-| [MAPREDUCE-1182](https://issues.apache.org/jira/browse/MAPREDUCE-1182) | Reducers fail with OutOfMemoryError while copying Map outputs |  Blocker | . | Chandra Prakash Bhagtani | Chandra Prakash Bhagtani |
-| [MAPREDUCE-1163](https://issues.apache.org/jira/browse/MAPREDUCE-1163) | hdfsJniHelper.h: Yahoo! specific paths are encoded |  Trivial | . | Allen Wittenauer | Allen Wittenauer |
-| [MAPREDUCE-1147](https://issues.apache.org/jira/browse/MAPREDUCE-1147) | Map output records counter missing for map-only jobs in new API |  Blocker | . | Chris Douglas | Amar Kamat |
-| [MAPREDUCE-1070](https://issues.apache.org/jira/browse/MAPREDUCE-1070) | Deadlock in FairSchedulerServlet |  Major | . | Todd Lipcon | Todd Lipcon |
-| [MAPREDUCE-1068](https://issues.apache.org/jira/browse/MAPREDUCE-1068) | In hadoop-0.20.0 streaming job do not throw proper verbose error message if file is not present |  Major | contrib/streaming | Peeyush Bishnoi | Amareshwari Sriramadasu |
+| [HADOOP-6460](https://issues.apache.org/jira/browse/HADOOP-6460) | Namenode runs of out of memory due to memory leak in ipc Server |  Blocker | . | Suresh Srinivas | Suresh Srinivas |
+| [HADOOP-5623](https://issues.apache.org/jira/browse/HADOOP-5623) | Streaming: process provided status messages are overwritten every 10 seoncds |  Major | . | Rick Cox | Rick Cox |
+| [HADOOP-6315](https://issues.apache.org/jira/browse/HADOOP-6315) | GzipCodec should not represent BuiltInZlibInflater as decompressorType |  Major | io | Aaron Kimball | Aaron Kimball |
+| [HDFS-187](https://issues.apache.org/jira/browse/HDFS-187) | TestStartup fails if hdfs is running in the same machine |  Major | test | Tsz Wo Nicholas Sze | Todd Lipcon |
+| [HDFS-745](https://issues.apache.org/jira/browse/HDFS-745) | TestFsck timeout on 0.20. |  Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [MAPREDUCE-433](https://issues.apache.org/jira/browse/MAPREDUCE-433) | TestReduceFetch failed. |  Major | . | Tsz Wo Nicholas Sze | Chris Douglas |
+| [HADOOP-6498](https://issues.apache.org/jira/browse/HADOOP-6498) | IPC client  bug may cause rpc call hang |  Blocker | ipc | Ruyue Ma | Ruyue Ma |
+| [HDFS-872](https://issues.apache.org/jira/browse/HDFS-872) | DFSClient 0.20.1 is incompatible with HDFS 0.20.2 |  Major | datanode, hdfs-client | Bassam Tabbara | Todd Lipcon |
 | [MAPREDUCE-1010](https://issues.apache.org/jira/browse/MAPREDUCE-1010) | Adding tests for changes in archives. |  Minor | harchive | Mahadev konar | Mahadev konar |
-| [MAPREDUCE-979](https://issues.apache.org/jira/browse/MAPREDUCE-979) | JobConf.getMemoryFor{Map\|Reduce}Task doesn't fallback to newer config knobs when mapred.taskmaxvmem is set to DISABLED\_MEMORY\_LIMIT of -1 |  Blocker | jobtracker, tasktracker | Arun C Murthy | Sreekanth Ramakrishnan |
-| [MAPREDUCE-826](https://issues.apache.org/jira/browse/MAPREDUCE-826) | harchive doesn't use ToolRunner / harchive returns 0 even if the job fails with exception |  Trivial | harchive | Koji Noguchi | Koji Noguchi |
+| [HADOOP-6506](https://issues.apache.org/jira/browse/HADOOP-6506) | Failing tests prevent the rest of test targets from execution. |  Major | build | Konstantin Boudnik | Konstantin Boudnik |
+| [HADOOP-6524](https://issues.apache.org/jira/browse/HADOOP-6524) | Contrib tests are failing Clover'ed build |  Major | build | Konstantin Boudnik | Konstantin Boudnik |
+| [HDFS-927](https://issues.apache.org/jira/browse/HDFS-927) | DFSInputStream retries too many times for new block locations |  Critical | hdfs-client | Todd Lipcon | Todd Lipcon |
+| [HADOOP-5611](https://issues.apache.org/jira/browse/HADOOP-5611) | C++ libraries do not build on Debian Lenny |  Critical | . | Todd Lipcon | Todd Lipcon |
+| [MAPREDUCE-1251](https://issues.apache.org/jira/browse/MAPREDUCE-1251) | c++ utils doesn't compile |  Major | . | Eli Collins | Eli Collins |
+| [HADOOP-5612](https://issues.apache.org/jira/browse/HADOOP-5612) | Some c++ scripts are not chmodded before ant execution |  Major | build | Todd Lipcon | Todd Lipcon |
+| [HADOOP-6575](https://issues.apache.org/jira/browse/HADOOP-6575) | Tests do not run on 0.20 branch |  Major | . | Chris Douglas | Chris Douglas |
+| [HADOOP-6576](https://issues.apache.org/jira/browse/HADOOP-6576) | TestStreamingStatus is failing on 0.20 branch |  Major | . | Chris Douglas | Todd Lipcon |
 | [MAPREDUCE-617](https://issues.apache.org/jira/browse/MAPREDUCE-617) | Streaming should not throw java.lang.RuntimeException and ERROR while displaying help |  Minor | contrib/streaming | Karam Singh |  |
-| [MAPREDUCE-433](https://issues.apache.org/jira/browse/MAPREDUCE-433) | TestReduceFetch failed. |  Major | . | Tsz Wo Nicholas Sze | Chris Douglas |
-| [MAPREDUCE-112](https://issues.apache.org/jira/browse/MAPREDUCE-112) | Reduce Input Records and Reduce Output Records counters are not being set when using the new Mapreduce reducer API |  Blocker | . | Jothi Padmanabhan | Jothi Padmanabhan |
 
 
 ### TESTS:
@@ -101,15 +89,3 @@
 | [HDFS-907](https://issues.apache.org/jira/browse/HDFS-907) | Add  tests for getBlockLocations and totalLoad metrics. |  Minor | namenode | Ravi Phulari | Ravi Phulari |
 
 
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org