Posted to common-commits@hadoop.apache.org by tg...@apache.org on 2013/12/03 05:11:22 UTC

svn commit: r1547276 [2/3] - in /hadoop/common/branches/branch-0.23.10/hadoop-common-project: hadoop-annotations/pom.xml hadoop-auth-examples/pom.xml hadoop-auth/pom.xml hadoop-common/pom.xml hadoop-common/src/main/docs/releasenotes.html pom.xml

Modified: hadoop/common/branches/branch-0.23.10/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23.10/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html?rev=1547276&r1=1547275&r2=1547276&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23.10/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-0.23.10/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html Tue Dec  3 04:11:20 2013
@@ -1,7649 +1,330 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
-<html>
-<head>
 <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>Hadoop 0.23.1 Release Notes</title>
+<title>Hadoop  0.23.10 Release Notes</title>
 <STYLE type="text/css">
-		H1 {font-family: sans-serif}
-		H2 {font-family: sans-serif; margin-left: 7mm}
-		TABLE {margin-left: 7mm}
-	</STYLE>
+	H1 {font-family: sans-serif}
+	H2 {font-family: sans-serif; margin-left: 7mm}
+	TABLE {margin-left: 7mm}
+</STYLE>
 </head>
 <body>
-<h1>Hadoop 0.23.1 Release Notes</h1>
-		These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
-
+<h1>Hadoop  0.23.10 Release Notes</h1>
+These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
 <a name="changes"/>
-<h2>Changes since Hadoop 0.23.0</h2>
-
-<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
+<h2>Changes since Hadoop 0.23.9</h2>
 <ul>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7348">HADOOP-7348</a>.
-     Major improvement reported by xiexianshan and fixed by xiexianshan (fs)<br>
-     <b>Modify the option of FsShell getmerge from [addnl] to [-nl] for more comprehensive</b><br>
-     <blockquote>                                              The &#39;fs -getmerge&#39; tool now uses a -nl flag to determine whether a newline should be added at the end of each file, in place of the &#39;addnl&#39; boolean flag that was used earlier.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7802">HADOOP-7802</a>.
-     Major bug reported by bmahe and fixed by bmahe <br>
-     <b>Hadoop scripts unconditionally source &quot;$bin&quot;/../libexec/hadoop-config.sh.</b><br>
-     <blockquote>                    Here is a patch to enable this behavior
-<br/>
-
-
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7963">HADOOP-7963</a>.
-     Blocker bug reported by tgraves and fixed by sseth <br>
-     <b>test failures: TestViewFileSystemWithAuthorityLocalFileSystem and TestViewFileSystemLocalFileSystem</b><br>
-     <blockquote>                                              Fix ViewFS to catch a null canonical service-name and pass tests TestViewFileSystem*
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7986">HADOOP-7986</a>.
-     Major bug reported by mahadev and fixed by mahadev <br>
-     <b>Add config for History Server protocol in hadoop-policy for service level authorization.</b><br>
-     <blockquote>                                              Adding config for MapReduce History Server protocol in hadoop-policy.xml for service level authorization.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-1314">HDFS-1314</a>.
-     Minor bug reported by karims and fixed by sho.shimauchi <br>
-     <b>dfs.blocksize accepts only absolute value</b><br>
-     <blockquote>                                              The default blocksize property &#39;dfs.blocksize&#39; now accepts unit symbols to be used instead of byte length. Values such as &quot;10k&quot;, &quot;128m&quot;, &quot;1g&quot; are now OK to provide instead of just no. of bytes as was before.
-
-      
-</blockquote></li>
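
A minimal sketch of what the HDFS-1314 change above allows, assuming the Configuration.getLongBytes helper used to expand such suffixes is available on this branch (the key name and example values come from the note; the class below is purely illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class BlockSizeSuffixExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A unit suffix can now be used instead of a raw byte count.
    conf.set("dfs.blocksize", "128m");
    // getLongBytes expands the suffix: "128m" becomes 134217728 bytes.
    long blockSize = conf.getLongBytes("dfs.blocksize", 64L * 1024 * 1024);
    System.out.println("dfs.blocksize = " + blockSize + " bytes");
  }
}
{code}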
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2129">HDFS-2129</a>.
-     Major sub-task reported by tlipcon and fixed by tlipcon (hdfs client, performance)<br>
-     <b>Simplify BlockReader to not inherit from FSInputChecker</b><br>
-     <blockquote>                                              BlockReader has been reimplemented to use direct byte buffers. If you use a custom socket factory, it must generate sockets that have associated Channels.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2130">HDFS-2130</a>.
-     Major sub-task reported by tlipcon and fixed by tlipcon (hdfs client)<br>
-     <b>Switch default checksum to CRC32C</b><br>
-     <blockquote>                                              The default checksum algorithm used on HDFS is now CRC32C. Data from previous versions of Hadoop can still be read backwards-compatibly.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2246">HDFS-2246</a>.
-     Major improvement reported by sanjay.radia and fixed by jnp <br>
-     <b>Shortcut local client reads to a Datanode&apos;s files directly</b><br>
-     <blockquote>                    1. New configurations
-<br/>
-
-a. dfs.block.local-path-access.user is the key in datanode configuration to specify the user allowed to do short circuit read.
-<br/>
-
-b. dfs.client.read.shortcircuit is the key to enable short circuit read at the client side configuration.
-<br/>
-
-c. dfs.client.read.shortcircuit.skip.checksum is the key to bypass checksum check at the client side.
-<br/>
-
-2. By default none of the above are enabled and short circuit read will not kick in.
-<br/>
-
-3. If security is on, the feature can be used only by a user that has Kerberos credentials at the client; therefore MapReduce tasks cannot benefit from it in general.
-<br/>
-
-
-</blockquote></li>
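
As a rough illustration of the client-side keys named in the HDFS-2246 note above (a minimal sketch; only the key names come from the note, the surrounding class is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class ShortCircuitReadConfig {
  public static Configuration clientConf() {
    Configuration conf = new Configuration();
    // Client-side switch for short-circuit reads (off by default).
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    // Optionally skip checksum verification on the client.
    conf.setBoolean("dfs.client.read.shortcircuit.skip.checksum", false);
    // The datanode must separately set dfs.block.local-path-access.user
    // to the user allowed to read block files directly.
    return conf;
  }
}
{code}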
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2316">HDFS-2316</a>.
-     Major new feature reported by szetszwo and fixed by szetszwo <br>
-     <b>[umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP</b><br>
-     <blockquote>                    Provide WebHDFS as a complete FileSystem implementation for accessing HDFS over HTTP.
-<br/>
-
-The previous hftp feature was a read-only FileSystem and did not provide &quot;write&quot; access.
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-778">MAPREDUCE-778</a>.
-     Major new feature reported by hong.tang and fixed by amar_kamat (tools/rumen)<br>
-     <b>[Rumen] Need a standalone JobHistory log anonymizer</b><br>
-     <blockquote>                                              Added an anonymizer tool to Rumen. Anonymizer takes a Rumen trace file and/or topology as input. It supports persistence and plugins to override the default behavior.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2733">MAPREDUCE-2733</a>.
-     Major task reported by vinaythota and fixed by vinaythota <br>
-     <b>Gridmix v3 cpu emulation system tests.</b><br>
-     <blockquote>                                              Adds system tests for the CPU emulation feature in Gridmix3.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2765">MAPREDUCE-2765</a>.
-     Major new feature reported by mithun and fixed by mithun (distcp, mrv2)<br>
-     <b>DistCp Rewrite</b><br>
-     <blockquote>                                              DistCpV2 added to hadoop-tools.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2784">MAPREDUCE-2784</a>.
-     Major bug reported by amar_kamat and fixed by amar_kamat (contrib/gridmix)<br>
-     <b>[Gridmix] TestGridmixSummary fails with NPE when run in DEBUG mode.</b><br>
-     <blockquote>                                              Fixed bugs in ExecutionSummarizer and ResourceUsageMatcher.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2863">MAPREDUCE-2863</a>.
-     Blocker improvement reported by acmurthy and fixed by tgraves (mrv2, nodemanager, resourcemanager)<br>
-     <b>Support web-services for RM &amp; NM</b><br>
-     <blockquote>                                              Support for web-services in YARN and MR components.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2950">MAPREDUCE-2950</a>.
-     Major bug reported by amar_kamat and fixed by ravidotg (contrib/gridmix)<br>
-     <b>[Gridmix] TestUserResolve fails in trunk</b><br>
-     <blockquote>                                              Fixes bug in TestUserResolve.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3102">MAPREDUCE-3102</a>.
-     Major sub-task reported by vinodkv and fixed by hitesh (mrv2, security)<br>
-     <b>NodeManager should fail fast with wrong configuration or permissions for LinuxContainerExecutor</b><br>
-     <blockquote>                                              Changed NodeManager to fail fast when LinuxContainerExecutor has wrong configuration or permissions.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3215">MAPREDUCE-3215</a>.
-     Minor sub-task reported by hitesh and fixed by hitesh (mrv2)<br>
-     <b>org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk</b><br>
-     <blockquote>                    Re-enabled and fixed bugs in the failing test TestNoJobSetupCleanup.
-<br/>
-
-
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3217">MAPREDUCE-3217</a>.
-     Minor sub-task reported by hitesh and fixed by devaraj.k (mrv2, test)<br>
-     <b>ant test TestAuditLogger fails on trunk</b><br>
-     <blockquote>                    Reenabled and fixed bugs in the failing ant test TestAuditLogger.
-<br/>
-
-
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3219">MAPREDUCE-3219</a>.
-     Minor sub-task reported by hitesh and fixed by hitesh (mrv2, test)<br>
-     <b>ant test TestDelegationToken failing on trunk</b><br>
-     <blockquote>                    Reenabled and fixed bugs in the failing test TestDelegationToken.
-<br/>
-
-
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3221">MAPREDUCE-3221</a>.
-     Minor sub-task reported by hitesh and fixed by devaraj.k (mrv2, test)<br>
-     <b>ant test TestSubmitJob failing on trunk</b><br>
-     <blockquote>                                              Fixed a bug in TestSubmitJob.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3280">MAPREDUCE-3280</a>.
-     Major bug reported by vinodkv and fixed by vinodkv (applicationmaster, mrv2)<br>
-     <b>MR AM should not read the username from configuration</b><br>
-     <blockquote>                                              Removed the unnecessary job user-name configuration in mapred-site.xml.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3297">MAPREDUCE-3297</a>.
-     Major task reported by sseth and fixed by sseth (mrv2)<br>
-     <b>Move Log Related components from yarn-server-nodemanager to yarn-common</b><br>
-     <blockquote>                    Moved log related components into yarn-common so that HistoryServer and clients can use them without depending on the yarn-server-nodemanager module.
-<br/>
-
-
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3299">MAPREDUCE-3299</a>.
-     Minor improvement reported by sseth and fixed by jeagles (mrv2)<br>
-     <b>Add AMInfo table to the AM job page</b><br>
-     <blockquote>                                              Added AMInfo table to the MR AM job pages to list all the job-attempts when AM restarts and recovers.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3312">MAPREDUCE-3312</a>.
-     Major bug reported by revans2 and fixed by revans2 (mrv2)<br>
-     <b>Make MR AM not send a stopContainer w/o corresponding start container</b><br>
-     <blockquote>                                              Modified MR AM to not send a stop-container request for a container that isn&#39;t launched at all.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3325">MAPREDUCE-3325</a>.
-     Major improvement reported by tgraves and fixed by tgraves (mrv2)<br>
-     <b>Improvements to CapacityScheduler doc</b><br>
-     <blockquote>                                              document changes only.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3333">MAPREDUCE-3333</a>.
-     Blocker bug reported by vinodkv and fixed by vinodkv (applicationmaster, mrv2)<br>
-     <b>MR AM for sort-job going out of memory</b><br>
-     <blockquote>                                              Fixed bugs in ContainerLauncher of MR AppMaster due to which per-container connections to NodeManager were lingering long enough to hit the ulimits on number of processes.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3339">MAPREDUCE-3339</a>.
-     Blocker bug reported by ramgopalnaali and fixed by sseth (mrv2)<br>
-     <b>Job hangs indefinitely if the child processes are killed on the NM. KILL_CONTAINER event type is continuously sent to containers that no longer exist</b><br>
-     <blockquote>                                              Fixed MR AM to stop considering node blacklisting after the number of nodes blacklisted crosses a threshold.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3342">MAPREDUCE-3342</a>.
-     Critical bug reported by tgraves and fixed by jeagles (jobhistoryserver, mrv2)<br>
-     <b>JobHistoryServer doesn&apos;t show job queue</b><br>
-     <blockquote>                                              Fixed JobHistoryServer to also show the job&#39;s queue name.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3345">MAPREDUCE-3345</a>.
-     Major bug reported by vinodkv and fixed by hitesh (mrv2, resourcemanager)<br>
-     <b>Race condition in ResourceManager causing TestContainerManagerSecurity to fail sometimes</b><br>
-     <blockquote>                                              Fixed a race condition in ResourceManager that was causing TestContainerManagerSecurity to fail sometimes.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3349">MAPREDUCE-3349</a>.
-     Blocker bug reported by vinodkv and fixed by amar_kamat (mrv2)<br>
-     <b>No rack-name logged in JobHistory for unsuccessful tasks</b><br>
-     <blockquote>                                              Unsuccessful tasks now log hostname and rackname to job history. 
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3355">MAPREDUCE-3355</a>.
-     Blocker bug reported by vinodkv and fixed by vinodkv (applicationmaster, mrv2)<br>
-     <b>AM scheduling hangs frequently with sort job on 350 nodes</b><br>
-     <blockquote>                                              Fixed MR AM&#39;s ContainerLauncher to handle node-command timeouts correctly.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3360">MAPREDUCE-3360</a>.
-     Critical improvement reported by kam_iitkgp and fixed by kamesh (mrv2)<br>
-     <b>Provide information about lost nodes in the UI.</b><br>
-     <blockquote>                                              Added information about lost/rebooted/decommissioned nodes on the webapps.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3368">MAPREDUCE-3368</a>.
-     Critical bug reported by rramya and fixed by hitesh (build, mrv2)<br>
-     <b>compile-mapred-test fails</b><br>
-     <blockquote>                                              Fixed ant test compilation.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3375">MAPREDUCE-3375</a>.
-     Major task reported by vinaythota and fixed by vinaythota <br>
-     <b>Memory Emulation system tests.</b><br>
-     <blockquote>                                              Added system tests to test the memory emulation feature in Gridmix.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3379">MAPREDUCE-3379</a>.
-     Major bug reported by sseth and fixed by sseth (mrv2, nodemanager)<br>
-     <b>LocalResourceTracker should not track deleted cache entries</b><br>
-     <blockquote>                                              Fixed LocalResourceTracker in NodeManager to remove deleted cache entries correctly.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3382">MAPREDUCE-3382</a>.
-     Critical bug reported by vinodkv and fixed by raviprak (applicationmaster, mrv2)<br>
-     <b>Network ACLs can prevent AMs to ping the Job-end notification URL</b><br>
-     <blockquote>                                              Enhanced MR AM to use a proxy to ping the job-end notification URL.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3387">MAPREDUCE-3387</a>.
-     Critical bug reported by revans2 and fixed by revans2 (mrv2)<br>
-     <b>A tracking URL of N/A before the app master is launched breaks oozie</b><br>
-     <blockquote>                                              Fixed AM&#39;s tracking URL to always go through the proxy, even before the job started, so that it works properly with oozie throughout the job execution.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3392">MAPREDUCE-3392</a>.
-     Blocker sub-task reported by johnvijoe and fixed by johnvijoe <br>
-     <b>Cluster.getDelegationToken() throws NPE if client.getDelegationToken() returns null.</b><br>
-     <blockquote>                                              Fixed Cluster&#39;s getDelegationToken&#39;s API to return null when there isn&#39;t a supported token.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3398">MAPREDUCE-3398</a>.
-     Blocker bug reported by sseth and fixed by sseth (mrv2, nodemanager)<br>
-     <b>Log Aggregation broken in Secure Mode</b><br>
-     <blockquote>                                              Fixed log aggregation to work correctly in secure mode. Contributed by Siddharth Seth.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3399">MAPREDUCE-3399</a>.
-     Blocker sub-task reported by sseth and fixed by sseth (mrv2, nodemanager)<br>
-     <b>ContainerLocalizer should request new resources after completing the current one</b><br>
-     <blockquote>                                              Modified ContainerLocalizer to send a heartbeat to NM immediately after downloading a resource instead of always waiting for a second.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3404">MAPREDUCE-3404</a>.
-     Critical bug reported by patwhitey2007 and fixed by eepayne (job submission, mrv2)<br>
-     <b>Speculative Execution: speculative map tasks launched even if -Dmapreduce.map.speculative=false</b><br>
-     <blockquote>                                              Corrected MR AM to honor speculative configuration and enable speculating either maps or reduces.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3407">MAPREDUCE-3407</a>.
-     Minor bug reported by hitesh and fixed by hitesh (mrv2)<br>
-     <b>Wrong jar getting used in TestMR*Jobs* for MiniMRYarnCluster</b><br>
-     <blockquote>                                              Fixed pom files to refer to the correct MR app-jar needed by the integration tests.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3412">MAPREDUCE-3412</a>.
-     Major bug reported by amar_kamat and fixed by amar_kamat <br>
-     <b>&apos;ant docs&apos; is broken</b><br>
-     <blockquote>                                              Fixes &#39;ant docs&#39; by removing stale references to capacity-scheduler docs.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3417">MAPREDUCE-3417</a>.
-     Blocker bug reported by tgraves and fixed by jeagles (mrv2)<br>
-     <b>job access controls not working in app master and job history UIs</b><br>
-     <blockquote>                                              Fixed job-access-controls to work with MR AM and JobHistoryServer web-apps.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3426">MAPREDUCE-3426</a>.
-     Blocker sub-task reported by hitesh and fixed by hitesh (mrv2)<br>
-     <b>uber-jobs tried to write outputs into wrong dir</b><br>
-     <blockquote>                                              Fixed MR AM in uber mode to write map intermediate outputs in the correct directory to work properly in secure mode.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3462">MAPREDUCE-3462</a>.
-     Blocker bug reported by amar_kamat and fixed by raviprak (mrv2, test)<br>
-     <b>Job submission failing in JUnit tests</b><br>
-     <blockquote>                                              Fixed failing JUnit tests in Gridmix.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3481">MAPREDUCE-3481</a>.
-     Major improvement reported by amar_kamat and fixed by amar_kamat (contrib/gridmix)<br>
-     <b>[Gridmix] Improve STRESS mode locking</b><br>
-     <blockquote>                                              Modified Gridmix STRESS mode locking structure. The submitted thread and the polling thread now run simultaneously without blocking each other. 
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3484">MAPREDUCE-3484</a>.
-     Major bug reported by raviprak and fixed by raviprak (mr-am, mrv2)<br>
-     <b>JobEndNotifier is getting interrupted before completing all its retries.</b><br>
-     <blockquote>                                              Fixed JobEndNotifier to not get interrupted before completing all its retries.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3487">MAPREDUCE-3487</a>.
-     Critical bug reported by tgraves and fixed by jlowe (mrv2)<br>
-     <b>jobhistory web ui task counters no longer link to the singletaskcounter page</b><br>
-     <blockquote>                                              Fixed JobHistory web-UI to display links to single task&#39;s counters&#39; page.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3490">MAPREDUCE-3490</a>.
-     Blocker bug reported by sseth and fixed by sharadag (mr-am, mrv2)<br>
-     <b>RMContainerAllocator counts failed maps towards Reduce ramp up</b><br>
-     <blockquote>                                              Fixed MapReduce AM to count failed maps also towards Reduce ramp up.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3511">MAPREDUCE-3511</a>.
-     Blocker sub-task reported by sseth and fixed by vinodkv (mr-am, mrv2)<br>
-     <b>Counters occupy a good part of AM heap</b><br>
-     <blockquote>                                              Removed a multitude of cloned/duplicate counters in the AM thereby reducing the AM heap size and preventing full GCs.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3512">MAPREDUCE-3512</a>.
-     Blocker sub-task reported by sseth and fixed by sseth (mr-am, mrv2)<br>
-     <b>Batch jobHistory disk flushes</b><br>
-     <blockquote>                                              Batching JobHistory flushing to DFS so that we don&#39;t flush for every event slowing down AM.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3519">MAPREDUCE-3519</a>.
-     Blocker sub-task reported by ravidotg and fixed by ravidotg (mrv2, nodemanager)<br>
-     <b>Deadlock in LocalDirsHandlerService and ShuffleHandler</b><br>
-     <blockquote>                                              Fixed a deadlock in NodeManager LocalDirectories&#39;s handling service.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3528">MAPREDUCE-3528</a>.
-     Major bug reported by sseth and fixed by sseth (mr-am, mrv2)<br>
-     <b>The task timeout check interval should be configurable independent of mapreduce.task.timeout</b><br>
-     <blockquote>                                              Fixed TaskHeartBeatHandler to use a new configuration for the thread loop interval separate from task-timeout configuration property.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3530">MAPREDUCE-3530</a>.
-     Blocker bug reported by karams and fixed by acmurthy (mrv2, resourcemanager, scheduler)<br>
-     <b>Sometimes NODE_UPDATE to the scheduler throws an NPE causing the scheduling to stop</b><br>
-     <blockquote>                                              Fixed an NPE occurring during scheduling in the ResourceManager.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3532">MAPREDUCE-3532</a>.
-     Critical bug reported by karams and fixed by kamesh (mrv2, nodemanager)<br>
-     <b>When 0 is provided as the port number in yarn.nodemanager.webapp.address, the NM&apos;s webserver component picks up a random port, but the NM keeps reporting port 0 to the RM</b><br>
-     <blockquote>                                              Modified NM to report correct http address when an ephemeral web port is configured.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3549">MAPREDUCE-3549</a>.
-     Blocker bug reported by tgraves and fixed by tgraves (mrv2)<br>
-     <b>write api documentation for web service apis for RM, NM, mapreduce app master, and job history server</b><br>
-     <blockquote>                    new files added: A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebServicesIntro.apt.vm
-<br/>
-
-A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
-<br/>
-
-A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
-<br/>
-
-A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/MapredAppMasterRest.apt.vm
-<br/>
-
-A      hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/HistoryServerRest.apt.vm
-<br/>
-
-
-<br/>
-
-The hadoop-project/src/site/site.xml is split into separate patch.
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3564">MAPREDUCE-3564</a>.
-     Blocker bug reported by mahadev and fixed by sseth (mrv2)<br>
-     <b>TestStagingCleanup and TestJobEndNotifier are failing on trunk.</b><br>
-     <blockquote>                                              Fixed failures in TestStagingCleanup and TestJobEndNotifier tests.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3568">MAPREDUCE-3568</a>.
-     Critical sub-task reported by vinodkv and fixed by vinodkv (mr-am, mrv2, performance)<br>
-     <b>Optimize Job&apos;s progress calculations in MR AM</b><br>
-     <blockquote>                                              Optimized Job&#39;s progress calculations in MR AM.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3586">MAPREDUCE-3586</a>.
-     Blocker bug reported by vinodkv and fixed by vinodkv (mr-am, mrv2)<br>
-     <b>Lots of AMs hanging around in PIG testing</b><br>
-     <blockquote>                                              Modified CompositeService to avoid duplicate stop operations thereby solving race conditions in MR AM shutdown.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3597">MAPREDUCE-3597</a>.
-     Major improvement reported by ravidotg and fixed by ravidotg (tools/rumen)<br>
-     <b>Provide a way to access other info of history file from Rumentool</b><br>
-     <blockquote>                                              Rumen now provides {{Parsed*}} objects. These objects provide extra information that are not provided by {{Logged*}} objects.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3618">MAPREDUCE-3618</a>.
-     Major sub-task reported by sseth and fixed by sseth (mrv2, performance)<br>
-     <b>TaskHeartbeatHandler holds a global lock for all task-updates</b><br>
-     <blockquote>                                              Fixed TaskHeartbeatHandler to not hold a global lock for all task-updates.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3630">MAPREDUCE-3630</a>.
-     Critical task reported by amolkekre and fixed by mahadev (mrv2)<br>
-     <b>NullPointerException running teragen</b><br>
-     <blockquote>                                              Committed to trunk and branch-0.23. Thanks Mahadev.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3639">MAPREDUCE-3639</a>.
-     Blocker bug reported by sseth and fixed by sseth (mrv2)<br>
-     <b>TokenCache likely broken for FileSystems which don&apos;t issue delegation tokens</b><br>
-     <blockquote>                                              Fixed TokenCache to work with absent FileSystem canonical service-names.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3641">MAPREDUCE-3641</a>.
-     Blocker sub-task reported by acmurthy and fixed by acmurthy (mrv2, scheduler)<br>
-     <b>CapacityScheduler should be more conservative assigning off-switch requests</b><br>
-     <blockquote>                                              Making CapacityScheduler more conservative so as to assign only one off-switch container in a single scheduling iteration.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3656">MAPREDUCE-3656</a>.
-     Blocker bug reported by karams and fixed by sseth (applicationmaster, mrv2, resourcemanager)<br>
-     <b>Sort job on 350 scale is consistently failing with latest MRV2 code </b><br>
-     <blockquote>                                              Fixed a race condition in MR AM which is failing the sort benchmark consistently.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3699">MAPREDUCE-3699</a>.
-     Major bug reported by vinodkv and fixed by hitesh (mrv2)<br>
-     <b>Default RPC handlers are very low for YARN servers</b><br>
-     <blockquote>                                              Increased RPC handlers for all YARN servers to reasonable values for working at scale.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3703">MAPREDUCE-3703</a>.
-     Critical bug reported by eepayne and fixed by eepayne (mrv2, resourcemanager)<br>
-     <b>ResourceManager should provide node lists in JMX output</b><br>
-     <blockquote>                    New JMX Bean in ResourceManager to provide list of live node managers:
-<br/>
-
-
-<br/>
-
-Hadoop:service=ResourceManager,name=RMNMInfo LiveNodeManagers
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3710">MAPREDUCE-3710</a>.
-     Major bug reported by sseth and fixed by sseth (mrv1, mrv2)<br>
-     <b>last split generated by FileInputFormat.getSplits may not have the best locality</b><br>
-     <blockquote>                                              Improved FileInputFormat to return better locality for the last split.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3711">MAPREDUCE-3711</a>.
-     Blocker sub-task reported by sseth and fixed by revans2 (mrv2)<br>
-     <b>AppMaster recovery for Medium to large jobs take long time</b><br>
-     <blockquote>                                              Fixed MR AM recovery so that only single selected task output is recovered and thus reduce the unnecessarily bloated recovery time.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3713">MAPREDUCE-3713</a>.
-     Blocker bug reported by sseth and fixed by acmurthy (mrv2, resourcemanager)<br>
-     <b>Incorrect headroom reported to jobs</b><br>
-     <blockquote>                                              Fixed the way head-room is allocated to applications by CapacityScheduler so that it deducts current-usage per user and not per-application.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3714">MAPREDUCE-3714</a>.
-     Blocker bug reported by vinodkv and fixed by vinodkv (mrv2, task)<br>
-     <b>Reduce hangs in a corner case</b><br>
-     <blockquote>                                              Fixed EventFetcher and Fetcher threads to shut-down properly so that reducers don&#39;t hang in corner cases.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3716">MAPREDUCE-3716</a>.
-     Blocker bug reported by jeagles and fixed by jeagles (mrv2)<br>
-     <b>java.io.File.createTempFile fails in map/reduce tasks</b><br>
-     <blockquote>                                              Fixing YARN+MR to allow MR jobs to be able to use java.io.File.createTempFile to create temporary files as part of their tasks.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3720">MAPREDUCE-3720</a>.
-     Major bug reported by vinodkv and fixed by vinodkv (client, mrv2)<br>
-     <b>Command line listJobs should not visit each AM</b><br>
-     <blockquote>                    Changed bin/mapred job -list to not print job-specific information not available at RM.
-<br/>
-
-
-<br/>
-
-Very minor incompatibility in cmd-line output, inevitable due to MRv2 architecture.
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3732">MAPREDUCE-3732</a>.
-     Blocker bug reported by acmurthy and fixed by acmurthy (mrv2, resourcemanager, scheduler)<br>
-     <b>CS should only use &apos;activeUsers with pending requests&apos; for computing user-limits</b><br>
-     <blockquote>                                              Modified CapacityScheduler to use only users with pending requests for computing user-limits.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3752">MAPREDUCE-3752</a>.
-     Blocker bug reported by acmurthy and fixed by acmurthy (mrv2)<br>
-     <b>Headroom should be capped by queue max-cap</b><br>
-     <blockquote>                                              Modified application limits to include queue max-capacities besides the usual user limits.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3754">MAPREDUCE-3754</a>.
-     Major bug reported by vinodkv and fixed by vinodkv (mrv2, webapps)<br>
-     <b>RM webapp should have pages filtered based on App-state</b><br>
-     <blockquote>                                              Modified RM UI to filter applications based on state of the applications.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3760">MAPREDUCE-3760</a>.
-     Major bug reported by rramya and fixed by vinodkv (mrv2)<br>
-     <b>Blacklisted NMs should not appear in Active nodes list</b><br>
-     <blockquote>                                              Changed active nodes list to not contain unhealthy nodes on the webUI and metrics.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3774">MAPREDUCE-3774</a>.
-     Major bug reported by mahadev and fixed by mahadev (mrv2)<br>
-     <b>yarn-default.xml should be moved to hadoop-yarn-common.</b><br>
-     <blockquote>      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3784">MAPREDUCE-3784</a>.
-     Major bug reported by rramya and fixed by acmurthy (mrv2)<br>
-     <b>maxActiveApplications(|PerUser) per queue is too low for small clusters</b><br>
-     <blockquote>                                              Fixed CapacityScheduler so that maxActiveApplication and maxActiveApplicationsPerUser per queue are not too low for small clusters. 
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3804">MAPREDUCE-3804</a>.
-     Major bug reported by davet and fixed by davet (jobhistoryserver, mrv2, resourcemanager)<br>
-     <b>yarn webapp interface vulnerable to cross-site scripting attacks</b><br>
-     <blockquote>                                              Fixed a cross-site scripting vulnerability in the webapp interface.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3808">MAPREDUCE-3808</a>.
-     Blocker bug reported by sseth and fixed by revans2 (mrv2)<br>
-     <b>NPE in FileOutputCommitter when running a 0 reduce job</b><br>
-     <blockquote>                                              Fixed an NPE in FileOutputCommitter for jobs with maps but no reduces.
-
-      
-</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3815">MAPREDUCE-3815</a>.
-     Critical sub-task reported by sseth and fixed by sseth (mrv2)<br>
-     <b>Data Locality suffers if the AM asks for containers using IPs instead of hostnames</b><br>
-     <blockquote>                                              Fixed MR AM to always use hostnames and never IPs when requesting containers so that scheduler can give off data local containers correctly.
-
-      
-</blockquote></li>
-
-</ul>
-
-<h3>Other Jiras (describe bug fixes and minor changes)</h3>
-<ul>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4515">HADOOP-4515</a>.
-     Minor improvement reported by abagri and fixed by sho.shimauchi <br>
-     <b>conf.getBoolean must be case insensitive</b><br>
-     <blockquote>Currently, if xx is set to &quot;TRUE&quot;, conf.getBoolean(&quot;xx&quot;, false) would return false. <br><br>conf.getBoolean should do an equalsIgnoreCase() instead of equals()<br><br>I am marking the change as incompatible because it does change semantics as pointed by Steve in HADOOP-4416</blockquote></li>
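
A small sketch of the behavioural change described in HADOOP-4515 above; the key name "xx" is just the placeholder used in the report:

{code}
import org.apache.hadoop.conf.Configuration;

public class CaseInsensitiveBooleanExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("xx", "TRUE");
    // Before the fix this returned the default (false); with the
    // case-insensitive comparison it now returns true.
    System.out.println(conf.getBoolean("xx", false));
  }
}
{code}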
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6490">HADOOP-6490</a>.
-     Minor bug reported by zshao and fixed by umamaheswararao (fs)<br>
-     <b>Path.normalize should use StringUtils.replace in favor of String.replace</b><br>
-     <blockquote>in our environment, we are seeing that the JobClient is going out of memory because Path.normalizePath(String) is called several tens of thousands of times, and each time it calls &quot;String.replace&quot; twice.<br><br>java.lang.String.replace compiles a regex to do the job which is very costly.<br>We should use org.apache.commons.lang.StringUtils.replace which is much faster and consumes almost no extra memory.<br></blockquote></li>
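
To make the performance argument in HADOOP-6490 above concrete, here is a hedged sketch contrasting the two calls (the path string is made up; both produce the same result, but the commons-lang variant avoids compiling a regex on the JDKs of that era):

{code}
import org.apache.commons.lang.StringUtils;

public class ReplaceCostExample {
  public static void main(String[] args) {
    String path = "/tmp//dir//file";
    // String.replace compiles a literal regex per call, which is costly when
    // invoked tens of thousands of times as described in the report above.
    String viaJdk = path.replace("//", "/");
    // StringUtils.replace scans the string directly, with no regex machinery.
    String viaCommons = StringUtils.replace(path, "//", "/");
    System.out.println(viaJdk.equals(viaCommons)); // true
  }
}
{code}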
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6614">HADOOP-6614</a>.
-     Minor improvement reported by stevel@apache.org and fixed by jmhsieh (util)<br>
-     <b>RunJar should provide more diags when it can&apos;t create a temp file</b><br>
-     <blockquote>When you see a stack trace about permissions, it is better if the trace included the file/directory at fault:<br>{code}<br>Exception in thread &quot;main&quot; java.io.IOException: Permission denied<br>	at java.io.UnixFileSystem.createFileExclusively(Native Method)<br>	at java.io.File.checkAndCreate(File.java:1704)<br>	at java.io.File.createTempFile(File.java:1792)<br>	at org.apache.hadoop.util.RunJar.main(RunJar.java:147)<br>{code}<br><br>As it is, you need to go into the code, discover that it&apos;s {{${hadoop.tmp.dir}/hadoop-unja...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6840">HADOOP-6840</a>.
-     Minor improvement reported by nspiegelberg and fixed by nspiegelberg (fs, io)<br>
-     <b>Support non-recursive create() in FileSystem &amp; SequenceFile.Writer</b><br>
-     <blockquote>The proposed solution for HBASE-2312 requires the sequence file to handle a non-recursive create.  This is already supported by HDFS, but needs to have an equivalent FileSystem &amp; SequenceFile.Writer API.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6886">HADOOP-6886</a>.
-     Minor improvement reported by nspiegelberg and fixed by nspiegelberg (fs)<br>
-     <b>LocalFileSystem Needs createNonRecursive API</b><br>
-     <blockquote>While running sanity check tests for HBASE-2312, I noticed that HDFS-617 did not include createNonRecursive() support for the LocalFileSystem.  This is a problem for HBase, which allows the user to run over the LocalFS instead of HDFS for local cluster testing.  I think this only affects 0.20-append, but may affect the trunk based upon how exactly FileContext handles non-recursive creates.</blockquote></li>
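
For context on the two createNonRecursive items above, a rough sketch of the call shape on FileSystem; the exact overload and argument order shown here are an assumption from a later API generation, not taken from this commit:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NonRecursiveCreateSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path file = new Path("/tmp/existing-dir/part-00000");
    // Unlike create(), this is expected to fail if the parent directory is
    // missing rather than silently creating the whole path.
    FSDataOutputStream out = fs.createNonRecursive(
        file, /* overwrite */ false, /* bufferSize */ 4096,
        /* replication */ (short) 1, /* blockSize */ 64L * 1024 * 1024, null);
    out.writeUTF("hello");
    out.close();
  }
}
{code}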
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7424">HADOOP-7424</a>.
-     Major improvement reported by eli and fixed by umamaheswararao <br>
-     <b>Log an error if the topology script doesn&apos;t handle multiple args</b><br>
-     <blockquote>ScriptBasedMapping#resolve currently warns and returns null if it passes n arguments to the topology script and gets back a different number of resolutions. This indicates a bug in the topology script (or it&apos;s input) and therefore should be an error.<br><br>{code}<br>// invalid number of entries returned by the script<br>LOG.warn(&quot;Script &quot; + scriptName + &quot; returned &quot;<br>   + Integer.toString(m.size()) + &quot; values when &quot;<br>   + Integer.toString(names.size()) + &quot; were expected.&quot;);<br>return null;<br>{code}<br><br>There&apos;s on...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7470">HADOOP-7470</a>.
-     Minor improvement reported by stevel@apache.org and fixed by enis (util)<br>
-     <b>move up to Jackson 1.8.8</b><br>
-     <blockquote>I see that hadoop-core still depends on Jackson 1.0.1 -but that project is now up to 1.8.2 in releases. Upgrading will make it easier for other Jackson-using apps that are more up to date to keep their classpath consistent.<br><br>The patch would be updating the ivy file to pull in the later version; no test</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7504">HADOOP-7504</a>.
-     Trivial improvement reported by eli and fixed by qwertymaniac (metrics)<br>
-     <b>hadoop-metrics.properties missing some Ganglia31 options </b><br>
-     <blockquote>The &quot;jvm&quot;, &quot;rpc&quot;, and &quot;ugi&quot; sections of hadoop-metrics.properties should have Ganglia31 options like &quot;dfs&quot; and &quot;mapred&quot;</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7574">HADOOP-7574</a>.
-     Trivial improvement reported by xiexianshan and fixed by xiexianshan (fs)<br>
-     <b>Improvement for FSshell -stat</b><br>
-     <blockquote>Add two optional formats for FSshell -stat, one is %G for group name of owner and the other is %U for user name.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7590">HADOOP-7590</a>.
-     Major sub-task reported by tucu00 and fixed by tucu00 (build)<br>
-     <b>Mavenize streaming and MR examples</b><br>
-     <blockquote>MR1 code is still available in MR2 for testing contribs.<br><br>While this is a temporary until contribs tests are ported to MR2.<br><br>As a follow up the contrib projects themselves should be mavenized.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7657">HADOOP-7657</a>.
-     Major improvement reported by mrbsd and fixed by decster <br>
-     <b>Add support for LZ4 compression</b><br>
-     <blockquote>According to several benchmark sites, LZ4 seems to overtake other fast compression algorithms, especially in the decompression speed area. The interface is also trivial to integrate (http://code.google.com/p/lz4/source/browse/trunk/lz4.h) and there is no license issue.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7736">HADOOP-7736</a>.
-     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (fs)<br>
-     <b>Remove duplicate call of Path#normalizePath during initialization.</b><br>
-     <blockquote>Found during code reading on HADOOP-6490, there seems to be an unnecessary call of {{normalizePath(...)}} being made in the constructor {{Path(Path, Path)}}. Since {{initialize(...)}} normalizes its received path string already, its unnecessary to do it to the path parameter in the constructor&apos;s call of the same.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7758">HADOOP-7758</a>.
-     Major improvement reported by tucu00 and fixed by tucu00 (fs)<br>
-     <b>Make GlobFilter class public</b><br>
-     <blockquote>Currently the GlobFilter class is package private.<br><br>As a generic filter it is quite useful (and I&apos;ve found myself doing cut&amp;paste of it a few times)</blockquote></li>
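
Once GlobFilter is public, it can be used directly as a PathFilter; a minimal sketch, assuming the single-String-argument constructor (the directory and pattern are made up):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.GlobFilter;
import org.apache.hadoop.fs.Path;

public class GlobFilterSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    // Keep only entries whose names match the glob pattern.
    FileStatus[] logs = fs.listStatus(new Path("/var/log"), new GlobFilter("*.log"));
    for (FileStatus status : logs) {
      System.out.println(status.getPath());
    }
  }
}
{code}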
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7761">HADOOP-7761</a>.
-     Major improvement reported by tlipcon and fixed by tlipcon (io, performance, util)<br>
-     <b>Improve performance of raw comparisons</b><br>
-     <blockquote>Guava has a nice implementation of lexicographical byte-array comparison that uses sun.misc.Unsafe to compare unsigned byte arrays long-at-a-time. Their benchmarks show it as being 2x more CPU-efficient than the equivalent pure-Java implementation. We can easily integrate this into WritableComparator.compareBytes to improve CPU performance in the shuffle.</blockquote></li>
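
The comparison that HADOOP-7761 above speeds up is the one exposed through WritableComparator.compareBytes; a minimal sketch of a direct call (the byte arrays are made up):

{code}
import org.apache.hadoop.io.WritableComparator;

public class RawCompareSketch {
  public static void main(String[] args) {
    byte[] a = "alpha".getBytes();
    byte[] b = "beta".getBytes();
    // Lexicographic comparison of unsigned bytes; after HADOOP-7761 this can
    // be backed by the long-at-a-time, Unsafe-based implementation.
    int cmp = WritableComparator.compareBytes(a, 0, a.length, b, 0, b.length);
    System.out.println(cmp < 0); // "alpha" sorts before "beta", so true
  }
}
{code}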
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7777">HADOOP-7777</a>.
-     Major improvement reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
-     <b>Implement a base class for DNSToSwitchMapping implementations that can offer extra topology information</b><br>
-     <blockquote>HDFS-2492 has identified a need for DNSToSwitchMapping implementations to provide a bit more topology information (e.g. whether or not there are multiple switches). This could be done by writing an extended interface, querying its methods if present and coming up with a default action if there is no extended interface. <br><br>Alternatively, we have a base class that all the standard mappings implement, with a boolean isMultiRack() method; all the standard subclasses would extend this, as could any...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7787">HADOOP-7787</a>.
-     Major bug reported by bmahe and fixed by bmahe (build)<br>
-     <b>Make source tarball use conventional name.</b><br>
-     <blockquote>When building binary and source tarballs, I get the following artifacts:<br>Binary tarball: hadoop-0.23.0-SNAPSHOT.tar.gz <br>Source tarball: hadoop-dist-0.23.0-SNAPSHOT-src.tar.gz<br><br>Notice the &quot;-dist&quot; right between &quot;hadoop&quot; and the version in the source tarball name.<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7801">HADOOP-7801</a>.
-     Major bug reported by bmahe and fixed by bmahe (build)<br>
-     <b>HADOOP_PREFIX cannot be overriden</b><br>
-     <blockquote>hadoop-config.sh forces HADOOP_prefix to a specific value:<br>export HADOOP_PREFIX=`dirname &quot;$this&quot;`/..<br><br>It would be nice to make this overridable.<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7804">HADOOP-7804</a>.
-     Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
-     <b>enable hadoop config generator to set dfs.block.local-path-access.user to enable short circuit read</b><br>
-     <blockquote>we have a new config that allows to select which user can have access for short circuit read. We should make that configurable through the config generator scripts.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7808">HADOOP-7808</a>.
-     Major new feature reported by daryn and fixed by daryn (fs, security)<br>
-     <b>Port token service changes from 205</b><br>
-     <blockquote>Need to merge the 205 token bug fixes and the feature to enable hostname-based tokens.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7810">HADOOP-7810</a>.
-     Blocker bug reported by johnvijoe and fixed by johnvijoe <br>
-     <b>move hadoop archive to core from tools</b><br>
-     <blockquote>&quot;The HadoopArchieves classes are included in the $HADOOP_HOME/hadoop_tools.jar, but this file is not found in `hadoop classpath`.<br><br>A Pig script using HCatalog&apos;s dynamic partitioning with HAR enabled will therefore fail if a jar with HAR is not included in the pig call&apos;s &apos;-cp&apos; and &apos;-Dpig.additional.jars&apos; arguments.&quot;<br><br>I am not aware of any reason to not include hadoop-tools.jar in &apos;hadoop classpath&apos;. Will attach a patch soon.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7811">HADOOP-7811</a>.
-     Major bug reported by jeagles and fixed by jeagles (security, test)<br>
-     <b>TestUserGroupInformation#testGetServerSideGroups test fails in chroot</b><br>
-     <blockquote>It is common when running in chroot to have root&apos;s group vector preserved when running as your self.<br><br>For example<br><br># Enter chroot<br>$ sudo chroot /myroot<br><br># still root<br>$ whoami<br>root<br><br># switch to user preserving root&apos;s group vector<br>$ sudo -u user -P -s<br><br># root&apos;s groups<br>$ groups root<br>a b c<br><br># user&apos;s real groups<br>$ groups user<br>d e f<br><br># user&apos;s effective groups<br>$ groups<br>a b c d e f<br>-------------------------------<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7837">HADOOP-7837</a>.
-     Major bug reported by stevel@apache.org and fixed by eli (conf)<br>
-     <b>no NullAppender in the log4j config</b><br>
-     <blockquote>running sbin/start-dfs.sh gives me a telling off about no null appender -should one be in the log4j config file.<br><br>Full trace (failure expected, but full output not as expected)<br>{code}<br>./start-dfs.sh <br>log4j:ERROR Could not find value for key log4j.appender.NullAppender<br>log4j:ERROR Could not instantiate appender named &quot;NullAppender&quot;.<br>Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.<br>Starting namenodes on []<br>cat: /Users/slo/J...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7843">HADOOP-7843</a>.
-     Major bug reported by johnvijoe and fixed by johnvijoe <br>
-     <b>compilation failing because workDir not initialized in RunJar.java</b><br>
-     <blockquote>Compilation is failing on 0.23 and trunk because workDir is not initialized in RunJar.java</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7853">HADOOP-7853</a>.
-     Blocker bug reported by daryn and fixed by daryn (security)<br>
-     <b>multiple javax security configurations cause conflicts</b><br>
-     <blockquote>Both UGI and the SPNEGO KerberosAuthenticator set the global javax security configuration.  SPNEGO stomps on UGI&apos;s security config which leads to kerberos/SASL authentication errors.<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7854">HADOOP-7854</a>.
-     Critical bug reported by daryn and fixed by daryn (security)<br>
-     <b>UGI getCurrentUser is not synchronized</b><br>
-     <blockquote>Sporadic {{ConcurrentModificationExceptions}} are originating from {{UGI.getCurrentUser}} when it needs to create a new instance.  The problem was specifically observed in a JT under heavy load when a post-job cleanup is accessing the UGI while a new job is being processed.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7858">HADOOP-7858</a>.
-     Trivial improvement reported by tlipcon and fixed by tlipcon <br>
-     <b>Drop some info logging to DEBUG level in IPC, metrics, and HTTP</b><br>
-     <blockquote>Our info level logs have gotten noisier and noisier over time, which is annoying both for users and when looking at unit tests. I&apos;d like to drop a few of the less useful INFO level messages down to DEBUG.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7859">HADOOP-7859</a>.
-     Major bug reported by eli and fixed by eli (fs)<br>
-     <b>TestViewFsHdfs.testgetFileLinkStatus is failing an assert</b><br>
-     <blockquote>Probably introduced by HADOOP-7783. I&apos;ll fix it.<br><br>{noformat}<br>java.lang.AssertionError<br>	at org.apache.hadoop.fs.FileContext.qualifySymlinkTarget(FileContext.java:1111)<br>	at org.apache.hadoop.fs.FileContext.access$000(FileContext.java:170)<br>	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1142)<br>	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1137)<br>	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2327)<br>	at org.apache.hadoop.fs.FileContext.getF...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7864">HADOOP-7864</a>.
-     Major bug reported by abayer and fixed by abayer (build)<br>
-     <b>Building mvn site with Maven &lt; 3.0.2 causes OOM errors</b><br>
-     <blockquote>If you try to run mvn site with Maven 3.0.0 (and possibly 3.0.1 - haven&apos;t actually tested that), you get hit with unavoidable OOM errors. Switching to Maven 3.0.2 or later fixes this. The enforcer should require 3.0.2 for builds.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7870">HADOOP-7870</a>.
-     Major bug reported by jmhsieh and fixed by jmhsieh <br>
-     <b>fix SequenceFile#createWriter with boolean createParent arg to respect createParent.</b><br>
-     <blockquote>After HBASE-6840, one set of calls to createNonRecursive(...) seems fishy - the new boolean createParent variable from the signature isn&apos;t used at all.  <br><br>{code}<br>+  public static Writer<br>+    createWriter(FileSystem fs, Configuration conf, Path name,<br>+                 Class keyClass, Class valClass, int bufferSize,<br>+                 short replication, long blockSize, boolean createParent,<br>+                 CompressionType compressionType, CompressionCodec codec,<br>+                 Metadata meta...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7874">HADOOP-7874</a>.
-     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
-     <b>native libs should be under lib/native/ dir</b><br>
-     <blockquote>Currently common and hdfs SO files end up under lib/ dir with all JARs, they should end up under lib/native.<br><br>In addition, the hadoop-config.sh script needs some cleanup when comes to native lib handling:<br><br>* it is using lib/native/${JAVA_PLATFORM} for the java.library.path, when it should use lib/native.<br>* it is looking for build/lib/native, this is from the old ant build, not applicable anymore.<br>* it is looking for the libhdfs.a and adding to the java.librar.path, this is not correct.<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7877">HADOOP-7877</a>.
-     Major task reported by szetszwo and fixed by szetszwo (documentation)<br>
-     <b>Federation: update Balancer documentation</b><br>
-     <blockquote>Update Balancer documentation for the new balancing policy and CLI.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7878">HADOOP-7878</a>.
-     Minor bug reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
-     <b>Regression HADOOP-7777 switch changes break HDFS tests when the isSingleSwitch() predicate is used</b><br>
-     <blockquote>This doesn&apos;t show up until you apply the HDFS-2492 patch, but the attempt to make the {{StaticMapping}} topology clever by deciding if it is single rack or multi rack based on its rack-&gt;node mapping breaks the HDFS {{TestBlocksWithNotEnoughRacks}} test. Why? Because the racks go in after the switch topology is cached by the {{BlockManager}}, which assumes the system is always single-switch.<br><br>Fix: default to assuming multi-switch; remove the intelligence, add a setter for anyone who really wan...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7887">HADOOP-7887</a>.
-     Critical bug reported by tucu00 and fixed by tucu00 (security)<br>
-     <b>KerberosAuthenticatorHandler is not setting KerberosName name rules from configuration</b><br>
-     <blockquote>While the KerberosAuthenticatorHandler defines the name rules property, it does not set it in KerberosName.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7890">HADOOP-7890</a>.
-     Trivial improvement reported by knoguchi and fixed by knoguchi (scripts)<br>
-     <b>Redirect hadoop script&apos;s deprecation message to stderr</b><br>
-     <blockquote>$ hadoop dfs -ls<br>DEPRECATED: Use of this script to execute hdfs command is deprecated.<br>Instead use the hdfs command for it.<br>...<br><br>If we&apos;re still letting the command run, I think we should redirect the deprecation message to stderr in case users have a script taking the output from stdout.<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7898">HADOOP-7898</a>.
-     Minor bug reported by sureshms and fixed by sureshms (security)<br>
-     <b>Fix javadoc warnings in AuthenticationToken.java</b><br>
-     <blockquote>Fix the following javadoc warning:<br>[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java:33: warning - Tag @link: reference not found: HttpServletRequest<br>[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7902">HADOOP-7902</a>.
-     Major bug reported by szetszwo and fixed by tucu00 <br>
-     <b>skipping name rules setting (if already set) should be done on UGI initialization only </b><br>
-     <blockquote>Both TestDelegationToken and TestOfflineEditsViewer are currently failing.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7907">HADOOP-7907</a>.
-     Blocker bug reported by tucu00 and fixed by tucu00 (build)<br>
-     <b>hadoop-tools JARs are not part of the distro</b><br>
-     <blockquote>After mavenizing streaming, the hadoop-streaming JAR is not part of the final tar.<br><br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7910">HADOOP-7910</a>.
-     Minor improvement reported by sho.shimauchi and fixed by sho.shimauchi (conf)<br>
-     <b>add configuration methods to handle human readable size values</b><br>
-     <blockquote>It&apos;s better to have new configuration methods which handle human-readable size values.<br>For example, see HDFS-1314.<br><br></blockquote></li>
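As a rough illustration of what such methods have to do, the sketch below parses human-readable size strings into byte counts; the helper name and the supported suffixes are illustrative assumptions, not necessarily the API this JIRA introduced.

{code}
import java.util.Locale;

public final class HumanReadableBytes {
  // Parse values such as "134217728", "128k", "64m" or "1g" into bytes,
  // treating suffixes as binary prefixes (k = 2^10, m = 2^20, ...).
  public static long parse(String value) {
    String v = value.trim().toLowerCase(Locale.ROOT);
    char last = v.charAt(v.length() - 1);
    long multiplier;
    switch (last) {
      case 'k': multiplier = 1L << 10; break;
      case 'm': multiplier = 1L << 20; break;
      case 'g': multiplier = 1L << 30; break;
      case 't': multiplier = 1L << 40; break;
      default:  return Long.parseLong(v);   // no suffix: plain byte count
    }
    return Long.parseLong(v.substring(0, v.length() - 1).trim()) * multiplier;
  }
}
{code}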
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7912">HADOOP-7912</a>.
-     Major bug reported by revans2 and fixed by revans2 (build)<br>
-     <b>test-patch should run eclipse:eclipse to verify that it does not break again</b><br>
-     <blockquote>Recently the eclipse:eclipse build was broken.  If we are going to document this on the wiki and have many developers use it we should verify that it always works.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7914">HADOOP-7914</a>.
-     Major bug reported by szetszwo and fixed by szetszwo (build)<br>
-     <b>duplicate declaration of hadoop-hdfs test-jar</b><br>
-     <blockquote>[WARNING] Some problems were encountered while building the effective model for org.apache.hadoop:hadoop-common-project:pom:0.24.0-SNAPSHOT<br>[WARNING] &apos;dependencyManagement.dependencies.dependency.(groupId:artifactId:type:classifier)&apos; must be unique: org.apache.hadoop:hadoop-hdfs:test-jar -&gt; duplicate declaration of version ${project.version} @ org.apache.hadoop:hadoop-project:0.24.0-SNAPSHOT, /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-project/pom.xml, line 140, ...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7917">HADOOP-7917</a>.
-     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
-     <b>compilation of protobuf files fails in windows/cygwin</b><br>
-     <blockquote>HADOOP-7899 &amp; HDFS-2511 introduced compilation of proto files as part of the build.<br><br>Such compilation is failing in windows/cygwin</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7919">HADOOP-7919</a>.
-     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (documentation)<br>
-     <b>[Doc] Remove hadoop.logfile.* properties.</b><br>
-     <blockquote>The following only resides in core-default.xml and doesn&apos;t look like it&apos;s used anywhere at all. At least a grep of the prop name and parts of it does not give me back anything at all.<br><br>These settings are now configurable via generic Log4J opts, via the shipped log4j.properties file in the distributions.<br><br>{code}<br>137 &lt;!--- logging properties --&gt;<br>138 <br>139 &lt;property&gt;<br>140   &lt;name&gt;hadoop.logfile.size&lt;/name&gt;<br>141   &lt;value&gt;10000000&lt;/value&gt;<br>142   &lt;description&gt;The max size of each log file&lt;/description&gt;<br>...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7933">HADOOP-7933</a>.
-     Critical bug reported by sseth and fixed by sseth <br>
-     <b>Viewfs changes for MAPREDUCE-3529</b><br>
-     <blockquote>ViewFs.getDelegationTokens returns a list of tokens for the associated namenodes. Credentials serializes these tokens using the service name for the actual namenodes. Effectively, tokens are not cached for viewfs (some more details in MR 3529). Affects any job which uses the TokenCache in tasks along with viewfs (some Pig jobs).<br><br>Talk to Jitendra about this, some options<br>1. Change Credentials.getAllTokens to return the key, instead of just a token list (associate the viewfs canonical name wit...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7934">HADOOP-7934</a>.
-     Critical improvement reported by tucu00 and fixed by tucu00 (build)<br>
-     <b>Normalize dependencies versions across all modules</b><br>
-     <blockquote>Move all dependencies versions to the dependencyManagement section in the hadoop-project POM<br><br>Move all plugin versions to the dependencyManagement section in the hadoop-project POM</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7936">HADOOP-7936</a>.
-     Major bug reported by eli and fixed by tucu00 (build)<br>
-     <b>There&apos;s a Hoop README in the root dir of the tarball</b><br>
-     <blockquote>The Hoop README.txt is now in the root dir of the tarball.<br><br>{noformat}<br>hadoop-trunk1 $ tar xvzf hadoop-dist/target/hadoop-0.24.0-SNAPSHOT.tar.gz  -C /tmp/<br>..<br>hadoop-trunk1 $ head -n3 /tmp/hadoop-0.24.0-SNAPSHOT/README.txt <br>-----------------------------------------------------------------------------<br>HttpFS - Hadoop HDFS over HTTP<br>{noformat}</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7939">HADOOP-7939</a>.
-     Major improvement reported by rvs and fixed by rvs (build, conf, documentation, scripts)<br>
-     <b>Improve Hadoop subcomponent integration in Hadoop 0.23</b><br>
-     <blockquote>h1. Introduction<br><br>For the rest of this proposal it is assumed that the current set<br>of Hadoop subcomponents is:<br> * hadoop-common<br> * hadoop-hdfs<br> * hadoop-yarn<br> * hadoop-mapreduce<br><br>It must be noted that this is an open ended list, though. For example,<br>implementations of additional frameworks on top of yarn (e.g. MPI) would<br>also be considered a subcomponent.<br><br>h1. Problem statement<br><br>Currently there&apos;s an unfortunate coupling and hard-coding present at the<br>level of launcher scripts, configuration s...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7948">HADOOP-7948</a>.
-     Minor bug reported by cim_michajlomatijkiw and fixed by cim_michajlomatijkiw (build)<br>
-     <b>Shell scripts created by hadoop-dist/pom.xml to build tar do not properly propagate failure</b><br>
-     <blockquote>The run() function, as defined in dist-layout-stitching.sh and dist-tar-stitching, created in hadoop-dist/pom.xml, does not properly propagate the error code of a failing command.  See the following:<br>{code}<br>    ...<br>    &quot;${@}&quot;                 # call fails with non-zero exit code<br>    if [ $? != 0 ]; then   <br>        echo               <br>        echo &quot;Failed!&quot;     <br>        echo               <br>        exit $?            # $?=result of echo above, likely 0, thus exit with code 0<br>    ...<br>{code}</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7949">HADOOP-7949</a>.
-     Trivial bug reported by eli and fixed by eli (ipc)<br>
-     <b>Updated maxIdleTime default in the code to match core-default.xml</b><br>
-     <blockquote>HADOOP-2909 intended to set the server max idle time for a connection to twice the client value. (&quot;The server-side max idle time should be greater than the client-side max idle time, for example, twice of the client-side max idle time.&quot;) This way when a server times out a connection it&apos;s due to a crashed client and not an inactive client, so we don&apos;t close client connections with outstanding requests (by setting 2x the client value on the server side the client should time out the connection firs...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7964">HADOOP-7964</a>.
-     Blocker bug reported by kihwal and fixed by daryn (security, util)<br>
-     <b>Deadlock in class init.</b><br>
-     <blockquote>After HADOOP-7808, client-side commands hang occasionally. There are cyclic dependencies in NetUtils and SecurityUtil class initialization. Upon initial look at the stack trace, two threads deadlock when they hit the either of class init the same time.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7971">HADOOP-7971</a>.
-     Blocker bug reported by tgraves and fixed by prashant_ <br>
-     <b>hadoop &lt;job/queue/pipes&gt; removed - should be added back, but deprecated</b><br>
-     <blockquote>The mapred subcommands (mradmin|jobtracker|tasktracker|pipes|job|queue)<br> were removed from the /bin/hadoop command. I believe for backwards compatibility at least some of these should have stayed along with the deprecated warnings.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7974">HADOOP-7974</a>.
-     Major bug reported by eli and fixed by qwertymaniac (fs)<br>
-     <b>TestViewFsTrash incorrectly determines the user&apos;s home directory</b><br>
-     <blockquote>HADOOP-7284 added a test called TestViewFsTrash which contains the following code to determine the user&apos;s home directory. It only works if the user&apos;s directory is one level deep, and breaks if the home directory is more than one level deep (e.g. user hudson, whose home dir might be /usr/lib/hudson instead of /home/hudson).<br><br>{code}<br>    // create a link for home directory so that trash path works<br>    // set up viewfs&apos;s home dir root to point to home dir root on target<br>    // But home dir is diffe...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7975">HADOOP-7975</a>.
-     Minor bug reported by qwertymaniac and fixed by qwertymaniac <br>
-     <b>Add entry to XML defaults for new LZ4 codec</b><br>
-     <blockquote>HADOOP-7657 added a new LZ4 codec, but failed to extend the io.compression.codecs list which MR/etc. use to load codecs.<br><br>We should add an entry to the core-default XML for this new codec, just as we did with Snappy.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7981">HADOOP-7981</a>.
-     Major bug reported by jeagles and fixed by jeagles (io)<br>
-     <b>Improve documentation for org.apache.hadoop.io.compress.Decompressor.getRemaining</b><br>
-     <blockquote></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7982">HADOOP-7982</a>.
-     Major bug reported by tlipcon and fixed by tlipcon (security)<br>
-     <b>UserGroupInformation fails to login if thread&apos;s context classloader can&apos;t load HadoopLoginModule</b><br>
-     <blockquote>In a few hard-to-reproduce situations, we&apos;ve seen a problem where the UGI login call causes a failure to login exception with the following cause:<br><br>Caused by: javax.security.auth.login.LoginException: unable to find <br>LoginModule class: org.apache.hadoop.security.UserGroupInformation <br>$HadoopLoginModule<br><br>After a bunch of debugging, I determined that this happens when the login occurs in a thread whose Context ClassLoader has been set to null.</blockquote></li>
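One way to see the failure mode: the JAAS LoginContext typically resolves login module classes through the thread's context classloader, so a null context classloader breaks the HadoopLoginModule lookup. The sketch below is a hedged client-side mitigation, assuming the caller may temporarily install a usable classloader; it is not necessarily the fix that was committed for this JIRA.

{code}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public final class SafeLogin {
  // Make sure the thread's context classloader is non-null before the UGI
  // login runs, then restore the previous value afterwards.
  public static UserGroupInformation currentUser() throws IOException {
    Thread t = Thread.currentThread();
    ClassLoader original = t.getContextClassLoader();
    if (original == null) {
      t.setContextClassLoader(UserGroupInformation.class.getClassLoader());
    }
    try {
      return UserGroupInformation.getCurrentUser();
    } finally {
      t.setContextClassLoader(original);
    }
  }
}
{code}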
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7987">HADOOP-7987</a>.
-     Major improvement reported by devaraj and fixed by jnp (security)<br>
-     <b>Support setting the run-as user in unsecure mode</b><br>
-     <blockquote>Some applications need to be able to perform actions (such as launch MR jobs) from map or reduce tasks. In earlier unsecure versions of hadoop (20.x), it was possible to do this by setting user.name in the configuration. But in 20.205 and 1.0, when running in unsecure mode, this does not work. (In secure mode, you can do this using the kerberos credentials).</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7988">HADOOP-7988</a>.
-     Major bug reported by jnp and fixed by jnp <br>
-     <b>Upper case in hostname part of the principals doesn&apos;t work with kerberos.</b><br>
-     <blockquote>Kerberos doesn&apos;t like upper case in the hostname part of the principals.<br>This issue has been seen in 23 as well as 1.0.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7993">HADOOP-7993</a>.
-     Major bug reported by anupamseth and fixed by anupamseth (conf)<br>
-     <b>Hadoop ignores old-style config options for enabling compressed output</b><br>
-     <blockquote>Hadoop seems to ignore the config options even though they are printed as deprecation warnings in the log: mapred.output.compress and<br>mapred.output.compression.codec<br><br>- settings that work on 0.20 but not on 0.23<br>mapred.output.compress=true<br>mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec<br><br>- settings that work on 0.23<br>mapreduce.output.fileoutputformat.compress=true<br>mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec<br><br>This breaks bac...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7997">HADOOP-7997</a>.
-     Major bug reported by gchanan and fixed by gchanan (io)<br>
-     <b>SequenceFile.createWriter(...createParent...) no longer works on existing file</b><br>
-     <blockquote>SequenceFile.createWriter no longer works on an existing file, because the old version specified OVERWRITE by default and the new version does not.  This breaks some HBase tests.<br><br>Tested against trunk.<br><br>Patch with test to follow.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7998">HADOOP-7998</a>.
-     Major bug reported by daryn and fixed by daryn (fs)<br>
-     <b>CheckFileSystem does not correctly honor setVerifyChecksum</b><br>
-     <blockquote>Regardless of the verify checksum flag, {{ChecksumFileSystem#open}} will instantiate a {{ChecksumFSInputChecker}} instead of a normal stream.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7999">HADOOP-7999</a>.
-     Critical bug reported by jlowe and fixed by jlowe (scripts)<br>
-     <b>&quot;hadoop archive&quot; fails with ClassNotFoundException</b><br>
-     <blockquote>Running &quot;hadoop archive&quot; from a command prompt results in this error:<br><br>Exception in thread &quot;main&quot; java.lang.NoClassDefFoundError: org/apache/hadoop/tools/HadoopArchives<br>Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.tools.HadoopArchives<br>	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)<br>	at java.security.AccessController.doPrivileged(Native Method)<br>	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)<br>	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)<br>	...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8000">HADOOP-8000</a>.
-     Critical bug reported by arpitgupta and fixed by arpitgupta <br>
-     <b>fetchdt command not available in bin/hadoop</b><br>
-     <blockquote>fetchdt command needs to be added to bin/hadoop to allow for backwards compatibility.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8001">HADOOP-8001</a>.
-     Major bug reported by daryn and fixed by daryn (fs)<br>
-     <b>ChecksumFileSystem&apos;s rename doesn&apos;t correctly handle checksum files</b><br>
-     <blockquote>Rename will move the src file and its crc *if present* to the destination.  If the src file has no crc, but the destination already exists with a crc, then src will be associated with the old file&apos;s crc.  Subsequent access to the file will fail with checksum errors.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8002">HADOOP-8002</a>.
-     Major bug reported by arpitgupta and fixed by arpitgupta <br>
-     <b>SecurityUtil acquired token message should be a debug rather than info</b><br>
-     <blockquote></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8006">HADOOP-8006</a>.
-     Major bug reported by umamaheswararao and fixed by daryn (fs)<br>
-     <b>TestFSInputChecker is failing in trunk.</b><br>
-     <blockquote>Trunk build number 939 failed with TestFSInputChecker.<br>https://builds.apache.org/job/Hadoop-Hdfs-trunk/939/<br><br>junit.framework.AssertionFailedError: expected:&lt;10&gt; but was:&lt;0&gt;<br>	at junit.framework.Assert.fail(Assert.java:47)<br>	at junit.framework.Assert.failNotEquals(Assert.java:283)<br>	at junit.framework.Assert.assertEquals(Assert.java:64)<br>	at junit.framework.Assert.assertEquals(Assert.java:130)<br>	at junit.framework.Assert.assertEquals(Assert.java:136)<br>	at org.apache.hadoop.hdfs.TestFSInputChecker.ch...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8009">HADOOP-8009</a>.
-     Critical improvement reported by tucu00 and fixed by tucu00 (build)<br>
-     <b>Create hadoop-client and hadoop-minicluster artifacts for downstream projects </b><br>
-     <blockquote>Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house system that interacts with Hadoop is quite challenging for the following reasons:<br><br>* *Different versions of Hadoop produce different artifacts:* Before Hadoop 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there are several (common, hdfs, mapred*, yarn*)<br><br>* *There are no &apos;client&apos; artifacts:* Current artifacts include all JARs needed to run the services, thus bringing into clients several JARs t...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8012">HADOOP-8012</a>.
-     Minor bug reported by rvs and fixed by rvs (scripts)<br>
-     <b>hadoop-daemon.sh and yarn-daemon.sh are trying to mkdir and chown log/pid dirs which can fail</b><br>
-     <blockquote>Here&apos;s what I see when using Hadoop in Bigtop:<br><br>{noformat}<br>$ sudo /sbin/service hadoop-hdfs-namenode start<br>Starting Hadoop namenode daemon (hadoop-namenode): chown: changing ownership of `/var/log/hadoop&apos;: Operation not permitted<br>starting namenode, logging to /var/log/hadoop/hadoop-hdfs-namenode-centos5.out<br>{noformat}<br><br>This is a cosmetic issue, but it would be nice to fix it.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8013">HADOOP-8013</a>.
-     Major bug reported by daryn and fixed by daryn (fs)<br>
-     <b>ViewFileSystem does not honor setVerifyChecksum</b><br>
-     <blockquote>{{ViewFileSystem#setVerifyChecksum}} is a no-op.  It should call {{setVerifyChecksum}} on the mount points.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8015">HADOOP-8015</a>.
-     Major improvement reported by daryn and fixed by daryn (fs)<br>
-     <b>ChRootFileSystem should extend FilterFileSystem</b><br>
-     <blockquote>{{ChRootFileSystem}} simply extends {{FileSystem}}, and attempts to delegate some methods to the underlying mount point.  It is essentially the same as {{FilterFileSystem}} but it mangles the paths to include the chroot path.  Unfortunately {{ChRootFileSystem}} is not delegating some methods that should be delegated.  Changing the inheritance will prevent a copy-n-paste of code for HADOOP-8013 and HADOOP-8014 into both {{ChRootFileSystem}} and {{FilterFileSystem}}.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8018">HADOOP-8018</a>.
-     Major bug reported by mattf and fixed by jeagles (build, test)<br>
-     <b>Hudson auto test for HDFS has started throwing javadoc: warning - Error fetching URL: http://java.sun.com/javase/6/docs/api/package-list</b><br>
-     <blockquote>Hudson automated testing has started failing with one javadoc warning message, consisting of<br>javadoc: warning - Error fetching URL: http://java.sun.com/javase/6/docs/api/package-list<br><br>This may be due to Oracle&apos;s decommissioning of the sun.com domain.  If one tries to access it manually, it is redirected to <br>http://download.oracle.com/javase/6/docs/api/package-list<br><br>So it looks like a build script needs to be updated.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8027">HADOOP-8027</a>.
-     Minor improvement reported by qwertymaniac and fixed by atm (metrics)<br>
-     <b>Visiting /jmx on the daemon web interfaces may print unnecessary error in logs</b><br>
-     <blockquote>Logs that follow a {{/jmx}} servlet visit:<br><br>{code}<br>11/11/22 12:09:52 ERROR jmx.JMXJsonServlet: getting attribute UsageThreshold of java.lang:type=MemoryPool,name=Par Eden Space threw an exception<br>javax.management.RuntimeMBeanException: java.lang.UnsupportedOperationException: Usage threshold is not supported<br>	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:856)<br>...<br>{code}</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-69">HDFS-69</a>.
-     Minor bug reported by raviphulari and fixed by qwertymaniac <br>
-     <b>Improve dfsadmin command line help </b><br>
-     <blockquote>Enhance dfsadmin command line help informing &quot;A quota of one forces a directory to remain empty&quot; </blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-362">HDFS-362</a>.
-     Major improvement reported by szetszwo and fixed by umamaheswararao (name-node)<br>
-     <b>FSEditLog should not writes long and short as UTF8 and should not use ArrayWritable for writing non-array items</b><br>
-     <blockquote>In FSEditLog, <br><br>- long and short are first converted to String and are further converted to UTF8<br><br>- For some non-array items, it first create an ArrayWritable object to hold all the items and then writes the ArrayWritable object.<br><br>These result creating many intermediate objects which affects Namenode CPU performance and Namenode restart.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-442">HDFS-442</a>.
-     Minor bug reported by rramya and fixed by qwertymaniac (test)<br>
-     <b>dfsthroughput in test.jar throws NPE</b><br>
-     <blockquote>On running hadoop jar hadoop-test.jar dfsthroughput OR hadoop org.apache.hadoop.hdfs.BenchmarkThroughput, we get NullPointerException. Below is the stacktrace:<br>{noformat}<br>Exception in thread &quot;main&quot; java.lang.NullPointerException<br>        at java.util.Hashtable.put(Hashtable.java:394)<br>        at java.util.Properties.setProperty(Properties.java:143)<br>        at java.lang.System.setProperty(System.java:731)<br>        at org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)<br>   ...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-554">HDFS-554</a>.
-     Minor improvement reported by stevel@apache.org and fixed by qwertymaniac (name-node)<br>
-     <b>BlockInfo.ensureCapacity may get a speedup from System.arraycopy()</b><br>
-     <blockquote>BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into the expanded array.  {{System.arraycopy()}} is generally much faster for this, as it can do a bulk memory copy. There is also the typesafe Java6 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.</blockquote></li>
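A minimal sketch of the suggested change, assuming an ensureCapacity-style method over an internal Object[]; the class and field names are illustrative, not the actual BlockInfo code.

{code}
public final class GrowableTriplets {
  private Object[] triplets = new Object[3];

  // Grow the backing array by 'extra' slots, copying the existing entries
  // with one bulk System.arraycopy call instead of an element-wise loop.
  void ensureCapacity(int extra) {
    Object[] old = triplets;
    triplets = new Object[old.length + extra];
    System.arraycopy(old, 0, triplets, 0, old.length);
  }
}
{code}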
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2178">HDFS-2178</a>.
-     Major improvement reported by tucu00 and fixed by tucu00 <br>
-     <b>HttpFS - a read/write Hadoop file system proxy</b><br>
-     <blockquote>We&apos;d like to contribute Hoop to Hadoop HDFS as a replacement (an improvement) for HDFS Proxy.<br><br>Hoop provides access to all Hadoop Distributed File System (HDFS) operations (read and write) over HTTP/S.<br><br>The Hoop server component is a REST HTTP gateway to HDFS supporting all file system operations. It can be accessed using standard HTTP tools (e.g. curl and wget), HTTP libraries from different programming languages (e.g. Perl, JavaScript) as well as using the Hoop client. The Hoop server compo...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2335">HDFS-2335</a>.
-     Major improvement reported by eli and fixed by umamaheswararao (data-node, name-node)<br>
-     <b>DataNodeCluster and NNStorage always pull fresh entropy</b><br>
-     <blockquote>Jira for giving DataNodeCluster and NNStorage the same treatment as HDFS-1835. They&apos;re not truly cryptographic uses either. We should also factor this out to a utility method; it seems like the three uses are slightly different, e.g. one uses DFSUtil.getRandom and the other creates a new Random object.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2349">HDFS-2349</a>.
-     Trivial improvement reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
-     <b>DN should log a WARN, not an INFO when it detects a corruption during block transfer</b><br>
-     <blockquote>Currently, in DataNode.java, we have:<br><br>{code}<br><br>      LOG.info(&quot;Can&apos;t replicate block &quot; + block<br>          + &quot; because on-disk length &quot; + onDiskLength <br>          + &quot; is shorter than NameNode recorded length &quot; + block.getNumBytes());<br><br>{code}<br><br>This log is better off as a WARN as it indicates (and also reports) a corruption.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2397">HDFS-2397</a>.
-     Major improvement reported by tlipcon and fixed by eli (name-node)<br>
-     <b>Undeprecate SecondaryNameNode</b><br>
-     <blockquote>I would like to consider un-deprecating the SecondaryNameNode for 0.23, and amending the documentation to indicate that it is still the most trustworthy way to run checkpoints, and while CN/BN may have some advantages, they&apos;re not battle-hardened as of yet. The test coverage for the 2NN is far superior to the CheckpointNode or BackupNode, and people have a lot more production experience. Indicating that it is deprecated before we have expanded test coverage of the CN/BN won&apos;t send the right ...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2454">HDFS-2454</a>.
-     Minor improvement reported by umamaheswararao and fixed by qwertymaniac (data-node)<br>
-     <b>Move maxXceiverCount check to before starting the thread in dataXceiver</b><br>
-     <blockquote>We can hoist the maxXceiverCount check out of DataXceiverServer#run; there&apos;s no need to check each time we accept a connection, we can check when we create a thread.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2502">HDFS-2502</a>.
-     Minor improvement reported by eli and fixed by qwertymaniac (documentation)<br>
-     <b>hdfs-default.xml should include dfs.name.dir.restore</b><br>
-     <blockquote></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2511">HDFS-2511</a>.
-     Minor improvement reported by tlipcon and fixed by tucu00 (build)<br>
-     <b>Add dev script to generate HDFS protobufs</b><br>
-     <blockquote>Would like to add a simple shell script to re-generate the protobuf code in HDFS -- just easier than remembering the right syntax.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2533">HDFS-2533</a>.
-     Minor improvement reported by tlipcon and fixed by tlipcon (data-node, performance)<br>
-     <b>Remove needless synchronization on FSDataSet.getBlockFile</b><br>
-     <blockquote>HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2536">HDFS-2536</a>.
-     Trivial improvement reported by atm and fixed by qwertymaniac (name-node)<br>
-     <b>Remove unused imports</b><br>
-     <blockquote>Looks like it has 11 unused imports by my count.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2541">HDFS-2541</a>.
-     Major bug reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
-     <b>For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.</b><br>
-     <blockquote>Running off 0.20-security, I noticed that one could get the following exception when scanners are used:<br><br>{code}<br>DataXceiver <br>java.lang.IllegalArgumentException: n must be positive <br>at java.util.Random.nextInt(Random.java:250) <br>at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251) <br>at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268) <br>at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(Da...</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2543">HDFS-2543</a>.
-     Major bug reported by bmahe and fixed by bmahe (scripts)<br>
-     <b>HADOOP_PREFIX cannot be overridden</b><br>
-     <blockquote>hadoop-config.sh forces HADOOP_PREFIX to a specific value:<br>export HADOOP_PREFIX=`dirname &quot;$this&quot;`/..<br><br>It would be nice to make this overridable.<br></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2544">HDFS-2544</a>.
-     Major bug reported by bmahe and fixed by bmahe (scripts)<br>
-     <b>Hadoop scripts unconditionally source &quot;$bin&quot;/../libexec/hadoop-config.sh.</b><br>
-     <blockquote>It would be nice to be able to specify some other location for hadoop-config.sh</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2545">HDFS-2545</a>.
-     Major bug reported by szetszwo and fixed by szetszwo <br>
-     <b>Webhdfs: Support multiple namenodes in federation</b><br>
-     <blockquote>DatanodeWebHdfsMethods only talks to the default namenode.  It won&apos;t work if there are multiple namenodes in federation.</blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2552">HDFS-2552</a>.
-     Major task reported by szetszwo and fixed by szetszwo (documentation)<br>
-     <b>Add WebHdfs Forrest doc</b><br>
-     <blockquote></blockquote></li>
-
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-2553">HDFS-2553</a>.
-     Critical bug reported by tlipcon and fixed by umamaheswararao (data-node)<br>
-     <b>BlockPoolSliceScanner spinning in loop</b><br>

[... 6810 lines stripped ...]