Posted to common-commits@hadoop.apache.org by co...@apache.org on 2013/08/11 02:42:05 UTC

svn commit: r1512842 [2/2] - /hadoop/common/branches/branch-2.0.6-alpha/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html

Modified: hadoop/common/branches/branch-2.0.6-alpha/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.0.6-alpha/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html?rev=1512842&r1=1512841&r2=1512842&view=diff
==============================================================================
--- hadoop/common/branches/branch-2.0.6-alpha/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-2.0.6-alpha/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html Sun Aug 11 00:42:04 2013
@@ -1,8858 +1,5 @@
 <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>Hadoop  2.0.5-alpha Release Notes</title>
-<STYLE type="text/css">
-	H1 {font-family: sans-serif}
-	H2 {font-family: sans-serif; margin-left: 7mm}
-	TABLE {margin-left: 7mm}
-</STYLE>
-</head>
-<body><html>
-<h1>Hadoop  2.0.5-alpha Release Notes</h1>
-These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
-<a name="changes"/>
-<h2>Changes since Hadoop 2.0.4-alpha</h2>
-<ul>
-<li> <a href='https://issues.apache.org/jira/browse/HADOOP-9407'>HADOOP-9407</a>.
-     commons-daemon 1.0.3 dependency has bad group id causing build issues<br>
-     <blockquote>Committed to branch-2.0.5 Modified changes.txt in trunk, branch-2 and branch-2.0.5-alpha accordingly.</blockquote></li>
-<li> <a href='https://issues.apache.org/jira/browse/MAPREDUCE-5240'>MAPREDUCE-5240</a>.
-     inside of FileOutputCommitter the initialized Credentials cache appears to be empty<br>
-     <blockquote>Committed to branch-2.0.5 Modified changes.txt in trunk, branch-2 and branch-2.0.5-alpha accordingly.</blockquote></li>
-</ul>
-</body></html>
-<h1>Hadoop  2.0.4-alpha Release Notes</h1>
-These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
-<a name="changes"/>
-<h2>Changes since Hadoop 2.0.3-alpha</h2>
-<ul>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-470">YARN-470</a>.
-     Major bug reported by Hitesh Shah and fixed by Siddharth Seth (nodemanager)<br>
-     <b>Support a way to disable resource monitoring on the NodeManager</b><br>
-     <blockquote>Currently, the memory management monitor's check is disabled when the maxMem is set to -1. However, the maxMem is also sent to the RM when the NM registers with it (to define the max limit of allocatable resources).
-
-We need an explicit flag to disable monitoring to avoid the problems caused by the overloading of the max memory value.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-449">YARN-449</a>.
-     Blocker bug reported by Siddharth Seth and fixed by  <br>
-     <b>HBase test failures when running against Hadoop 2</b><br>
-     <blockquote>Post YARN-429, unit tests for HBase continue to fail since the classpath for the MRAppMaster is not being set correctly.
-Reverting YARN-129 may fix this, but I'm not sure that's the correct solution. My guess is, as Alexandro pointed out in YARN-129, maven classloader magic is messing up java.class.path.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-443">YARN-443</a>.
-     Major improvement reported by Thomas Graves and fixed by Thomas Graves (nodemanager)<br>
-     <b>allow OS scheduling priority of NM to be different than the containers it launches</b><br>
-     <blockquote>It would be nice if we could have the nodemanager run at a different OS scheduling priority than the containers so that you can still communicate with the nodemanager if the containers get out of control.
-
-On linux we could launch the nodemanager at a higher priority, but then all the containers it launches would also be at that higher priority, so we need a way for the container executor to launch them at a lower priority.
-
-I'm not sure how this applies to windows if at all.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-429">YARN-429</a>.
-     Blocker bug reported by Siddharth Seth and fixed by Siddharth Seth (resourcemanager)<br>
-     <b>capacity-scheduler config missing from yarn-test artifact</b><br>
-     <blockquote>MiniYARNCluster and MiniMRCluster are unusable by downstream projects with the 2.0.3-alpha release, since the capacity-scheduler configuration is missing from the test artifact.
-hadoop-yarn-server-tests-3.0.0-SNAPSHOT-tests.jar should include the default capacity-scheduler configuration. Also, this doesn't need to be part of the default classpath - and should be moved out of the top level directory in the dist package.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5117">MAPREDUCE-5117</a>.
-     Blocker bug reported by Roman Shaposhnik and fixed by Siddharth Seth (security)<br>
-     <b>With security enabled HS delegation token renewer fails</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5094">MAPREDUCE-5094</a>.
-     Major bug reported by Siddharth Seth and fixed by Siddharth Seth <br>
-     <b>Disable mem monitoring by default in MiniMRYarnCluster</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5088">MAPREDUCE-5088</a>.
-     Blocker bug reported by Roman Shaposhnik and fixed by Daryn Sharp <br>
-     <b>MR Client gets an renewer token exception while Oozie is submitting a job</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5083">MAPREDUCE-5083</a>.
-     Major bug reported by Siddharth Seth and fixed by Siddharth Seth (mrv2)<br>
-     <b>MiniMRCluster should use a random component when creating an actual cluster</b><br>
-     <blockquote>Committed to branch-2.0.4. Modified changes.txt in trunk, branch-2 and branch-2.0.4 accordingly.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5053">MAPREDUCE-5053</a>.
-     Major bug reported by Robert Parker and fixed by Robert Parker <br>
-     <b>java.lang.InternalError from decompression codec cause reducer to fail</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5023">MAPREDUCE-5023</a>.
-     Critical bug reported by Kendall Thrapp and fixed by Ravi Prakash (jobhistoryserver , webapps)<br>
-     <b>History Server Web Services missing Job Counters</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5006">MAPREDUCE-5006</a>.
-     Major bug reported by Alejandro Abdelnur and fixed by Sandy Ryza (contrib/streaming)<br>
-     <b>streaming tests failing</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4549">MAPREDUCE-4549</a>.
-     Blocker bug reported by Robert Joseph Evans and fixed by Robert Joseph Evans (mrv2)<br>
-     <b>Distributed cache conflicts break backwards compatibility</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4649">HDFS-4649</a>.
-     Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (namenode , security , webhdfs)<br>
-     <b>Webhdfs cannot list large directories</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4646">HDFS-4646</a>.
-     Minor bug reported by Jagane Sundar and fixed by  (namenode)<br>
-     <b>createNNProxyWithClientProtocol ignores configured timeout value</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4581">HDFS-4581</a>.
-     Major bug reported by Rohit Kochar and fixed by Rohit Kochar (datanode)<br>
-     <b>DataNode#checkDiskError should not be called on network errors</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4577">HDFS-4577</a>.
-     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
-     <b>Webhdfs operations should declare if authentication is required</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4571">HDFS-4571</a>.
-     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (webhdfs)<br>
-     <b>WebHDFS should not set the service hostname on the server side</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4567">HDFS-4567</a>.
-     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
-     <b>Webhdfs does not need a token for token operations</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4566">HDFS-4566</a>.
-     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
-     <b>Webhdfs token cancellation should use authentication</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4560">HDFS-4560</a>.
-     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
-     <b>Webhdfs cannot use tokens obtained by another user</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-4548">HDFS-4548</a>.
-     Blocker sub-task reported by Daryn Sharp and fixed by Daryn Sharp <br>
-     <b>Webhdfs doesn't renegotiate SPNEGO token</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HDFS-3344">HDFS-3344</a>.
-     Major bug reported by Tsz Wo (Nicholas), SZE and fixed by Kihwal Lee (namenode)<br>
-     <b>Unreliable corrupt blocks counting in TestProcessCorruptBlocks</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9471">HADOOP-9471</a>.
-     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (build)<br>
-     <b>hadoop-client wrongfully excludes jetty-util JAR, breaking webhdfs</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9467">HADOOP-9467</a>.
-     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (metrics)<br>
-     <b>Metrics2 record filtering (.record.filter.include/exclude) does not filter by name</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9444">HADOOP-9444</a>.
-     Blocker bug reported by Konstantin Boudnik and fixed by Roman Shaposhnik (conf)<br>
-     <b>$var shell substitution in properties are not expanded in hadoop-policy.xml</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9406">HADOOP-9406</a>.
-     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (build)<br>
-     <b>hadoop-client leaks dependency on JDK tools jar</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9405">HADOOP-9405</a>.
-     Minor bug reported by Andrew Wang and fixed by Andrew Wang (test , tools)<br>
-     <b>TestGridmixSummary#testExecutionSummarizer is broken</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9399">HADOOP-9399</a>.
-     Minor bug reported by Todd Lipcon and fixed by Konstantin Boudnik (build)<br>
-     <b>protoc maven plugin doesn't work on mvn 3.0.2</b><br>
-     <blockquote>Committed to 2.0.4-alpha branch</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9379">HADOOP-9379</a>.
-     Trivial improvement reported by Arpit Gupta and fixed by Arpit Gupta <br>
-     <b>capture the ulimit info after printing the log to the console</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9374">HADOOP-9374</a>.
-     Major improvement reported by Daryn Sharp and fixed by Daryn Sharp (security)<br>
-     <b>Add tokens from -tokenCacheFile into UGI</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9301">HADOOP-9301</a>.
-     Blocker bug reported by Roman Shaposhnik and fixed by Alejandro Abdelnur (build)<br>
-     <b>hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie &amp; HttpFS</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9299">HADOOP-9299</a>.
-     Blocker bug reported by Roman Shaposhnik and fixed by Daryn Sharp (security)<br>
-     <b>kerberos name resolution is kicking in even when kerberos is not configured</b><br>
-     <blockquote></blockquote></li>
-</ul>
-</body></html>
-<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>Hadoop  2.0.3-alpha Release Notes</title>
-<STYLE type="text/css">
-	H1 {font-family: sans-serif}
-	H2 {font-family: sans-serif; margin-left: 7mm}
-	TABLE {margin-left: 7mm}
-</STYLE>
-</head>
-<body>
-<h1>Hadoop  2.0.3-alpha Release Notes</h1>
-These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
-<a name="changes"/>
-<h2>Changes since Hadoop 2.0.2</h2>
-<ul>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-372">YARN-372</a>.
-     Minor task reported by Siddharth Seth and fixed by Siddharth Seth <br>
-     <b>Move InlineDispatcher from hadoop-yarn-server-resourcemanager to hadoop-yarn-common</b><br>
-     <blockquote>InlineDispatcher is a utility used in unit tests. Belongs in yarn-common instead of yarn-server-resource-manager.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-364">YARN-364</a>.
-     Major bug reported by Jason Lowe and fixed by Jason Lowe <br>
-     <b>AggregatedLogDeletionService can take too long to delete logs</b><br>
-     <blockquote>AggregatedLogDeletionService uses the yarn.log-aggregation.retain-seconds property to determine which logs should be deleted, but it uses the same value to determine how often to check for old logs.  This means logs could actually linger up to twice as long as configured.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-360">YARN-360</a>.
-     Critical bug reported by Daryn Sharp and fixed by Daryn Sharp <br>
-     <b>Allow apps to concurrently register tokens for renewal</b><br>
-     <blockquote>{{DelegationTokenRenewer#addApplication}} has an unnecessary {{synchronized}} keyword.  This serializes job submissions and can add unnecessary latency and/or hang all submissions if there are problems renewing the token.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-357">YARN-357</a>.
-     Major bug reported by Daryn Sharp and fixed by Daryn Sharp (resourcemanager)<br>
-     <b>App submission should not be synchronized</b><br>
-     <blockquote>MAPREDUCE-2953 fixed a race condition with querying of app status by making {{RMClientService#submitApplication}} synchronously invoke {{RMAppManager#submitApplication}}. However, the {{synchronized}} keyword was also added to {{RMAppManager#submitApplication}} with the comment:
-bq. I made the submitApplication synchronized to keep it consistent with the other routines in RMAppManager although I do not believe it needs it since the rmapp datastructure is already a concurrentMap and I don't see anything else that would be an issue.
-
-It's been observed that app submission latency is being unnecessarily impacted.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-355">YARN-355</a>.
-     Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (resourcemanager)<br>
-     <b>RM app submission jams under load</b><br>
-     <blockquote>The RM performs a loopback connection to itself to renew its own tokens.  If app submissions consume all RPC handlers for {{ClientRMProtocol}}, then app submissions block because it cannot loopback to itself to do the renewal.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-354">YARN-354</a>.
-     Blocker bug reported by Liang Xie and fixed by Liang Xie <br>
-     <b>WebAppProxyServer exits immediately after startup</b><br>
-     <blockquote>Please see HDFS-4426 for details. I found the YARN WebAppProxyServer is broken by HADOOP-9181 as well; here's the hot fix, which I verified manually in our test cluster.
-
-I really apologize for bringing about such trouble...</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-343">YARN-343</a>.
-     Major bug reported by Thomas Graves and fixed by Xuan Gong (capacityscheduler)<br>
-     <b>Capacity Scheduler maximum-capacity value -1 is invalid</b><br>
-     <blockquote>I tried to start the resource manager using the capacity scheduler with a particular queue's maximum-capacity set to -1, which is supposed to disable it according to the docs, but I got the following exception:
-
-java.lang.IllegalArgumentException: Illegal value  of maximumCapacity -0.01 used in call to setMaxCapacity for queue foo
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.checkMaxCapacity(CSQueueUtils.java:31)
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:220)
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.&lt;init&gt;(LeafQueue.java:191)
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:310)
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:325)
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:232)
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:202)
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-336">YARN-336</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
-     <b>Fair scheduler FIFO scheduling within a queue only allows 1 app at a time </b><br>
-     <blockquote>The fair scheduler allows apps to be scheduled in FIFO fashion within a queue.  Currently, when this setting is turned on, the scheduler only allows one app to run at a time.  While apps submitted earlier should get first priority for allocations, when there is space remaining, other apps should have a chance to get at them.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-334">YARN-334</a>.
-     Critical bug reported by Thomas Graves and fixed by Thomas Graves <br>
-     <b>Maven RAT plugin is not checking all source files</b><br>
-     <blockquote>yarn side of HADOOP-9097
-
-
-
-Running 'mvn apache-rat:check' passes, but running RAT by hand (by downloading the JAR) produces some warnings for Java files, amongst others.
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-331">YARN-331</a>.
-     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
-     <b>Fill in missing fair scheduler documentation</b><br>
-     <blockquote>In the fair scheduler documentation, a few config options are missing:
-locality.threshold.node
-locality.threshold.rack
-max.assign
-aclSubmitApps
-minSharePreemptionTimeout
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-330">YARN-330</a>.
-     Major bug reported by Hitesh Shah and fixed by Sandy Ryza (nodemanager)<br>
-     <b>Flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown</b><br>
-     <blockquote>Seems to be timing related, as the container status RUNNING returned by the ContainerManager does not really indicate that the container task has been launched. A sleep of 5 seconds is not reliable.
-
-Running org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
-Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.353 sec &lt;&lt;&lt; FAILURE!
-testKillContainersOnShutdown(org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown)  Time elapsed: 9283 sec  &lt;&lt;&lt; FAILURE!
-junit.framework.AssertionFailedError: Did not find sigterm message
-	at junit.framework.Assert.fail(Assert.java:47)
-	at junit.framework.Assert.assertTrue(Assert.java:20)
-	at org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown.testKillContainersOnShutdown(TestNodeManagerShutdown.java:162)
-
-Logs:
-
-2013-01-09 14:13:08,401 INFO  [AsyncDispatcher event handler] container.Container (ContainerImpl.java:handle(835)) - Container container_0_0000_01_000000 transitioned from NEW to LOCALIZING
-2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] localizer.LocalizedResource (LocalizedResource.java:handle(194)) - Resource file:hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/tmpDir/scriptFile.sh transitioned from INIT to DOWNLOADING
-2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] localizer.ResourceLocalizationService (ResourceLocalizationService.java:handle(521)) - Created localizer for container_0_0000_01_000000
-2013-01-09 14:13:08,589 INFO  [LocalizerRunner for container_0_0000_01_000000] localizer.ResourceLocalizationService (ResourceLocalizationService.java:writeCredentials(895)) - Writing credentials to the nmPrivate file hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0_0000_01_000000.tokens. Credentials list:
-2013-01-09 14:13:08,628 INFO  [LocalizerRunner for container_0_0000_01_000000] nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:createUserCacheDirs(373)) - Initializing user nobody
-2013-01-09 14:13:08,709 INFO  [main] containermanager.ContainerManagerImpl (ContainerManagerImpl.java:getContainerStatus(538)) - Returning container_id {, app_attempt_id {, application_id {, id: 0, cluster_timestamp: 0, }, attemptId: 1, }, }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
-2013-01-09 14:13:08,781 INFO  [LocalizerRunner for container_0_0000_01_000000] nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:startLocalizer(99)) - Copying from hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0_0000_01_000000.tokens to hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/usercache/nobody/appcache/application_0_0000/container_0_0000_01_000000.tokens
-
-
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-328">YARN-328</a>.
-     Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas (resourcemanager)<br>
-     <b>Use token request messages defined in hadoop common </b><br>
-     <blockquote>YARN changes related to HADOOP-9192 to reuse the protobuf messages defined in common.
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-325">YARN-325</a>.
-     Blocker bug reported by Jason Lowe and fixed by Arun C Murthy (capacityscheduler)<br>
-     <b>RM CapacityScheduler can deadlock when getQueueInfo() is called and a container is completing</b><br>
-     <blockquote>If a client calls getQueueInfo on a parent queue (e.g.: the root queue) and containers are completing then the RM can deadlock.  getQueueInfo() locks the ParentQueue and then calls the child queues' getQueueInfo() methods in turn.  However when a container completes, it locks the LeafQueue then calls back into the ParentQueue.  When the two mix, it's a recipe for deadlock.
-
-Stacktrace to follow.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-320">YARN-320</a>.
-     Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (resourcemanager)<br>
-     <b>RM should always be able to renew its own tokens</b><br>
-     <blockquote>YARN-280 introduced fast-fail for job submissions with bad tokens.  Unfortunately, other stack components like oozie and customers are acquiring RM tokens with a hardcoded dummy renewer value.  These jobs would fail after 24 hours because the RM token couldn't be renewed, but fast-fail is failing them immediately.  The RM should always be able to renew its own tokens submitted with a job.  The renewer field may continue to specify an external user who can renew.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-319">YARN-319</a>.
-     Major bug reported by shenhong and fixed by shenhong (resourcemanager , scheduler)<br>
-     <b>Submit a job to a queue that is not allowed in fairScheduler, client will hang forever.</b><br>
-     <blockquote>The RM uses fairScheduler; when a client submits a job to a queue that does not allow the user to submit jobs to it, the client will hang forever.
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-315">YARN-315</a>.
-     Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas <br>
-     <b>Use security token protobuf definition from hadoop common</b><br>
-     <blockquote>YARN part of HADOOP-9173.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-302">YARN-302</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Fair scheduler assignmultiple should default to false</b><br>
-     <blockquote>The MR1 default was false.  When true, it results in overloading some machines with many tasks and underutilizing others.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-301">YARN-301</a>.
-     Major bug reported by shenhong and fixed by shenhong (resourcemanager , scheduler)<br>
-     <b>Fair scheduler throws ConcurrentModificationException when iterating over app's priorities</b><br>
-     <blockquote>In my test cluster, fairScheduler hit a ConcurrentModificationException and the RM crashed; here is the message:
-
-2012-12-30 17:14:17,171 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling event type NODE_UPDATE to the scheduler
-java.util.ConcurrentModificationException
-        at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100)
-        at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:297)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:181)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:780)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:842)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:98)
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:340)
-        at java.lang.Thread.run(Thread.java:662)
-
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-300">YARN-300</a>.
-     Major bug reported by shenhong and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>After YARN-271, fair scheduler can infinite loop and not schedule any application.</b><br>
-     <blockquote>After YARN-271, when yarn.scheduler.fair.max.assign&lt;=0 and a node has been reserved, fairScheduler will loop infinitely and not schedule any application.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-293">YARN-293</a>.
-     Critical bug reported by Devaraj K and fixed by Robert Joseph Evans (nodemanager)<br>
-     <b>Node Manager leaks LocalizerRunner object for every Container </b><br>
-     <blockquote>Node Manager creates a new LocalizerRunner object for every container and puts in ResourceLocalizationService.LocalizerTracker.privLocalizers map but it never removes from the map.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-288">YARN-288</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Fair scheduler queue doesn't accept any jobs when ACLs are configured.</b><br>
-     <blockquote>If a queue is configured with an ACL for who can submit jobs, no jobs are allowed, even if a user on the list tries.
-
-This is caused by the scheduler thinking the user is "yarn", because it calls UserGroupInformation.getCurrentUser() instead of UserGroupInformation.createRemoteUser() with the given user name.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-286">YARN-286</a>.
-     Major new feature reported by Tom White and fixed by Tom White (applications)<br>
-     <b>Add a YARN ApplicationClassLoader</b><br>
-     <blockquote>Add a classloader that provides webapp-style class isolation for use by applications. This is the YARN part of MAPREDUCE-1700 (which was already developed in that JIRA).</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-285">YARN-285</a>.
-     Major improvement reported by Derek Dagit and fixed by Derek Dagit <br>
-     <b>RM should be able to provide a tracking link for apps that have already been purged</b><br>
-     <blockquote>As applications complete, the RM tracks their IDs in a completed list.  This list is routinely truncated to limit the total number of applications remembered by the RM.
-
-When a user clicks the History link for a job, the browser is redirected to the application's tracking link obtained from the stored application instance.  But when the application has been purged from the RM, an error is displayed.
-
-In very busy clusters the rate at which applications complete can cause applications to be purged from the RM's internal list within hours, which breaks the proxy URLs users have saved for their jobs.
-
-We would like the RM to provide valid tracking links that persist so that users are not frustrated by broken links.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-283">YARN-283</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
-     <b>Fair scheduler fails to get queue info without root prefix</b><br>
-     <blockquote>If queue1 exists, and a client calls "mapred queue -info queue1", an exception is thrown.  If they use root.queue1, it works correctly.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-282">YARN-282</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza <br>
-     <b>Fair scheduler web UI double counts Apps Submitted</b><br>
-     <blockquote>Each app submitted is reported twice under "Apps Submitted"</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-280">YARN-280</a>.
-     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (resourcemanager)<br>
-     <b>RM does not reject app submission with invalid tokens</b><br>
-     <blockquote>The RM will launch an app with invalid tokens.  The tasks will languish with failed connection retries, followed by task reattempts, followed by app reattempts.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-278">YARN-278</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Fair scheduler maxRunningApps config causes no apps to make progress</b><br>
-     <blockquote>This occurs because the scheduler erroneously chooses apps to offer resources to that are not runnable, then later decides they are not runnable, and doesn't try to give the resources to anyone else.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-277">YARN-277</a>.
-     Major improvement reported by Bikas Saha and fixed by Bikas Saha <br>
-     <b>Use AMRMClient in DistributedShell to exemplify the approach</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-272">YARN-272</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
-     <b>Fair scheduler log messages try to print objects without overridden toString methods</b><br>
-     <blockquote>A lot of junk gets printed out like this:
-
-2012-12-11 17:31:52,998 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp: Application application_1355270529654_0003 reserved container org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl@324f0f97 on node host: c1416.hal.cloudera.com:46356 #containers=7 available=0 used=8192, currently has 4 at priority org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@33; currentReservation 4096</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-271">YARN-271</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Fair scheduler hits IllegalStateException trying to reserve different apps on same node</b><br>
-     <blockquote>After the fair scheduler reserves a container on a node, it doesn't check for reservations it just made when trying to make more reservations during the same heartbeat.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-267">YARN-267</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Fix fair scheduler web UI</b><br>
-     <blockquote>The fair scheduler web UI was broken by MAPREDUCE-4720.  The queues area is not shown, and changes are required to still show the fair share inside the applications table.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-266">YARN-266</a>.
-     Critical bug reported by Ravi Prakash and fixed by Ravi Prakash (resourcemanager)<br>
-     <b>RM and JHS Web UIs are blank because AppsBlock is not escaping string properly</b><br>
-     <blockquote>e.g. Job names with a line feed "\n" are causing a line feed in the JSON array being written out (since we are only using StringEscapeUtils.escapeHtml()) and the JavaScript parser complains that string quotes are unclosed.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-264">YARN-264</a>.
-     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
-     <b>y.s.rm.DelegationTokenRenewer attempts to renew token even after removing an app</b><br>
-     <blockquote>yarn.s.rm.security.DelegationTokenRenewer uses TimerTask/Timer. When such a timer task is canceled, already scheduled tasks run to completion. The task should check for such cancellation before running. Also, delegationTokens needs to be synchronized on all accesses.
-
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-258">YARN-258</a>.
-     Major bug reported by Ravi Prakash and fixed by Ravi Prakash (resourcemanager)<br>
-     <b>RM web page UI shows Invalid Date for start and finish times</b><br>
-     <blockquote>Whenever the number of jobs was greater than 100, two JavaScript arrays were being populated: appsData and appsTableData. appsData was winning out (because it was coming out later) and so renderHadoopDate was trying to render a &lt;br title=""...&gt; string.
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-254">YARN-254</a>.
-     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Update fair scheduler web UI for hierarchical queues</b><br>
-     <blockquote>The fair scheduler should have a web UI similar to the capacity scheduler that shows nested queues.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-253">YARN-253</a>.
-     Critical bug reported by Tom White and fixed by Tom White (nodemanager)<br>
-     <b>Container launch may fail if no files were localized</b><br>
-     <blockquote>This can be demonstrated with DistributedShell. The containers running the shell do not have any files to localize (if there is no shell script to copy) so if they run on a different NM to the AM (which does localize files), then they will fail since the appcache directory does not exist.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-251">YARN-251</a>.
-     Major bug reported by Tom White and fixed by Tom White (resourcemanager)<br>
-     <b>Proxy URI generation fails for blank tracking URIs</b><br>
-     <blockquote>If the URI is an empty string (the default if not set), then a warning is displayed. A null URI displays no such warning. These two cases should be handled in the same way.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-230">YARN-230</a>.
-     Major sub-task reported by Bikas Saha and fixed by Bikas Saha (resourcemanager)<br>
-     <b>Make changes for RM restart phase 1</b><br>
-     <blockquote>As described in YARN-128, phase 1 of RM restart puts in place mechanisms to save application state and read it back after restart. Upon restart, the NMs are asked to reboot and the previously running AMs are restarted.
-After this is done, RM HA and work-preserving restart can continue in parallel. For more details please refer to the design document in YARN-128.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-229">YARN-229</a>.
-     Major sub-task reported by Bikas Saha and fixed by Bikas Saha (resourcemanager)<br>
-     <b>Remove old code for restart</b><br>
-     <blockquote>Much of the code is dead/commented out and is not executed. Removing it will help with making and understanding new changes.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-225">YARN-225</a>.
-     Critical bug reported by Devaraj K and fixed by Devaraj K (resourcemanager)<br>
-     <b>Proxy Link in RM UI throws NPE in Secure mode</b><br>
-     <blockquote>{code:xml}
-java.lang.NullPointerException
-	at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:241)
-	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
-	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
-	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
-	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
-	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
-	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
-	at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:975)
-	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
-	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
-	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
-	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
-	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
-	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
-	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
-	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
-	at org.mortbay.jetty.Server.handle(Server.java:326)
-	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
-	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
-	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
-	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
-	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
-	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
-	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
-
-
-{code}</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-224">YARN-224</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza <br>
-     <b>Fair scheduler logs too many nodeUpdate INFO messages</b><br>
-     <blockquote>The RM logs are filled with an INFO message the fair scheduler logs every time it receives a nodeUpdate.  It should be taken out or demoted to debug.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-223">YARN-223</a>.
-     Critical bug reported by Radim Kolar and fixed by Radim Kolar <br>
-     <b>Change processTree interface to work better with native code</b><br>
-     <blockquote>The problem is that on every update of the process tree a new object is required. This is undesirable when working with a processTree implementation in native code.
-
-Replace ProcessTree.getProcessTree() with updateProcessTree(). No new object allocation is needed and it simplifies application code a bit.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-222">YARN-222</a>.
-     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
-     <b>Fair scheduler should create queue for each user by default</b><br>
-     <blockquote>In MR1 the fair scheduler's default behavior was to create a pool for each user.  The YARN fair scheduler has this capability, but it should be turned on by default, for consistency.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-219">YARN-219</a>.
-     Critical sub-task reported by Robert Joseph Evans and fixed by Robert Joseph Evans (nodemanager)<br>
-     <b>NM should aggregate logs when application finishes.</b><br>
-     <blockquote>The NM should only aggregate logs when the application finishes.  This will reduce the load on the NN, especially with respect to lease renewal.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-217">YARN-217</a>.
-     Blocker bug reported by Devaraj K and fixed by Devaraj K (resourcemanager)<br>
-     <b>yarn rmadmin commands fail in secure cluster</b><br>
-     <blockquote>All the rmadmin commands fail in secure mode with the "protocol org.apache.hadoop.yarn.server.nodemanager.api.RMAdminProtocolPB is unauthorized" message in RM logs.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-216">YARN-216</a>.
-     Major improvement reported by Todd Lipcon and fixed by Robert Joseph Evans <br>
-     <b>Remove jquery theming support</b><br>
-     <blockquote>As of today we have 9.4MB of JQuery themes in our code tree. In addition to being a waste of space, it's a highly questionable feature. I've never heard anyone complain that the Hadoop interface isn't themeable enough, and there's far more value in consistency across installations than there is in themeability. Let's rip it out.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-214">YARN-214</a>.
-     Major bug reported by Jason Lowe and fixed by Jonathan Eagles (resourcemanager)<br>
-     <b>RMContainerImpl does not handle event EXPIRE at state RUNNING</b><br>
-     <blockquote>RMContainerImpl has a race condition where a container can enter the RUNNING state just as the container expires.  This results in an invalid event transition error:
-
-{noformat}
-2012-11-11 05:31:38,954 [ResourceManager Event Processor] ERROR org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: Can't handle this event at current state
-org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: EXPIRE at RUNNING
-        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
-        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
-        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:205)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:44)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp.containerCompleted(SchedulerApp.java:203)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1337)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainer(CapacityScheduler.java:739)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:659)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:80)
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:340)
-        at java.lang.Thread.run(Thread.java:619)
-{noformat}
-
-EXPIRE needs to be handled (well at least ignored) in the RUNNING state to account for this race condition.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-212">YARN-212</a>.
-     Blocker bug reported by Nathan Roberts and fixed by Nathan Roberts (nodemanager)<br>
-     <b>NM state machine ignores an APPLICATION_CONTAINER_FINISHED event when it shouldn't</b><br>
-     <blockquote>The NM state machines can make the following two invalid state transitions when a speculative attempt is killed shortly after it gets started. When this happens the NM keeps the log aggregation context open for this application and therefore chews up FDs and leases on the NN, eventually running the NN out of FDs and bringing down the entire cluster.
-
-
-2012-11-07 05:36:33,774 [AsyncDispatcher event handler] WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Can't handle this event at current state
-org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: APPLICATION_CONTAINER_FINISHED at INITING
-
-2012-11-07 05:36:33,775 [AsyncDispatcher event handler] WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Can't handle this event at current state: Current: [DONE], eventType: [INIT_CONTAINER]
-org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: INIT_CONTAINER at DONE
-
-
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-206">YARN-206</a>.
-     Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
-     <b>TestApplicationCleanup.testContainerCleanup occasionally fails</b><br>
-     <blockquote>testContainerCleanup is occasionally failing with the error:
-
-testContainerCleanup(org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup): expected:&lt;2&gt; but was:&lt;1&gt;
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-204">YARN-204</a>.
-     Major bug reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov (applications)<br>
-     <b>test coverage for org.apache.hadoop.tools</b><br>
-     <blockquote>Added some tests for org.apache.hadoop.tools</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-202">YARN-202</a>.
-     Critical bug reported by Kihwal Lee and fixed by Kihwal Lee <br>
-     <b>Log Aggregation generates a storm of fsync() for namenode</b><br>
-     <blockquote>When log aggregation is on, each write to an aggregated container log causes hflush() to be called. For large clusters, this can create a lot of fsync() calls for the namenode.
-
-We have seen 6-7x increase in the average number of fsync operations compared to 1.0.x on a large busy cluster. Over 99% of fsync ops were for log aggregation writing to tmp files.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-201">YARN-201</a>.
-     Critical bug reported by Jason Lowe and fixed by Jason Lowe (capacityscheduler)<br>
-     <b>CapacityScheduler can take a very long time to schedule containers if requests are off cluster</b><br>
-     <blockquote>When a user runs a job where one of the input files is a large file on another cluster, the job can create many splits on nodes which are unreachable for computation from the current cluster.  The off-switch delay logic in LeafQueue can cause the ResourceManager to allocate containers for the job very slowly.  In one case the job was only getting one container every 23 seconds, and the queue had plenty of spare capacity.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-189">YARN-189</a>.
-     Blocker bug reported by Thomas Graves and fixed by Thomas Graves (resourcemanager)<br>
-     <b>deadlock in RM - AMResponse object</b><br>
-     <blockquote>we ran into a deadlock in the RM.
-
-=============================
-"1128743461@qtp-1252749669-5201":
-  waiting for ownable synchronizer 0x00002aabbc87b960, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
-  which is held by "AsyncDispatcher event handler"
-"AsyncDispatcher event handler":
-  waiting to lock monitor 0x00002ab0bba3a370 (object 0x00002aab3d4cd698, a org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl),
-  which is held by "IPC Server handler 36 on 8030"
-"IPC Server handler 36 on 8030":
-  waiting for ownable synchronizer 0x00002aabbc87b960, (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
-  which is held by "AsyncDispatcher event handler"
-Java stack information for the threads listed above:
-===================================================
-"1128743461@qtp-1252749669-5201":
-        at sun.misc.Unsafe.park(Native Method)
-        - parking to wait for  &lt;0x00002aabbc87b960&gt; (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
-        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:941)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1261)
-        at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getFinalApplicationStatus(RMAppAttemptImpl.java:295)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getFinalApplicationStatus(RMAppImpl.java:222)
-        at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices.getApps(RMWebServices.java:328)
-        at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
-        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
-        at java.lang.reflect.Method.invoke(Method.java:597)
-        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
-        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
-        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaM
-...
-...
-..
-  
-
-"AsyncDispatcher event handler":
-        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.unregisterAttempt(ApplicationMasterService.java:307)
-        - waiting to lock &lt;0x00002aab3d4cd698&gt; (a org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$BaseFinalTransition.transition(RMAppAttemptImpl.java:647)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$FinalTransition.transition(RMAppAttemptImpl.java:809)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$FinalTransition.transition(RMAppAttemptImpl.java:796)
-        at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
-        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
-        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
-        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
-        - locked &lt;0x00002aabbb673090&gt; (a org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:478)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:81)
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:436)
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:417)
-        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
-        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
-        at java.lang.Thread.run(Thread.java:619)
-"IPC Server handler 36 on 8030":
-        at sun.misc.Unsafe.park(Native Method)
-        - parking to wait for  &lt;0x00002aabbc87b960&gt; (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
-        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
-        at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:807)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.pullJustFinishedContainers(RMAppAttemptImpl.java:437)
-        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:285)
-        - locked &lt;0x00002aab3d4cd698&gt; (a org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl)
-        at org.apache.hadoop.yarn.api.impl.pb.service.AMRMProtocolPBServiceImpl.allocate(AMRMProtocolPBServiceImpl.java:56)
-        at org.apache.hadoop.yarn.proto.AMRMProtocol$AMRMProtocolService$2.callBlockingMethod(AMRMProtocol.java:87)
-        at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Server.call(ProtoOverHadoopRpcEngine.java:353)
-        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1528)
-        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1524)
-        at java.security.AccessController.doPrivileged(Native Method)
-        at javax.security.auth.Subject.doAs(Subject.java:396)
-        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
-        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1522)
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-188">YARN-188</a>.
-     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov (capacityscheduler)<br>
-     <b>Coverage fixing for CapacityScheduler</b><br>
-     <blockquote>some tests for CapacityScheduler
-YARN-188-branch-0.23.patch patch for branch 0.23
-YARN-188-branch-2.patch patch for branch 2
-YARN-188-trunk.patch  patch for trunk
-
-</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-187">YARN-187</a>.
-     Major new feature reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
-     <b>Add hierarchical queues to the fair scheduler</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-186">YARN-186</a>.
-     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov (resourcemanager , scheduler)<br>
-     <b>Coverage fixing LinuxContainerExecutor</b><br>
-     <blockquote>Adds tests for LinuxContainerExecutor.
-YARN-186-branch-0.23.patch: patch for branch-0.23
-YARN-186-branch-2.patch: patch for branch-2
-YARN-186-trunk.patch: patch for trunk</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-184">YARN-184</a>.
-     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza <br>
-     <b>Remove unnecessary locking in fair scheduler, and address findbugs excludes.</b><br>
-     <blockquote>In YARN-12, locks were added to all fields of QueueManager to address findbugs.  In addition, findbugs exclusions were added in response to MAPREDUCE-4439, without a deep look at the code.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-183">YARN-183</a>.
-     Minor improvement reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
-     <b>Clean up fair scheduler code</b><br>
-     <blockquote>The fair scheduler code has a bunch of minor stylistic issues.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-181">YARN-181</a>.
-     Critical bug reported by Siddharth Seth and fixed by Siddharth Seth (resourcemanager)<br>
-     <b>capacity-scheduler.xml move breaks Eclipse import</b><br>
-     <blockquote>Eclipse doesn't seem to handle "testResources" which resolve to an absolute path. YARN-140 moved capacity-scheduler.cfg a couple of levels up to the hadoop-yarn project.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-180">YARN-180</a>.
-     Critical bug reported by Thomas Graves and fixed by Arun C Murthy (capacityscheduler)<br>
-     <b>Capacity scheduler - containers that get reserved create the container token too early</b><br>
-     <blockquote>The capacity scheduler has the ability to 'reserve' containers.  Unfortunately, before it decides whether a container goes to reserved rather than assigned, the Container object is created, which in turn creates a container token that expires in roughly 10 minutes by default.
-
-This means that by the time the NM frees up enough space on that node for the container to move to assigned, the container token may have expired.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-179">YARN-179</a>.
-     Blocker bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli (capacityscheduler)<br>
-     <b>Bunch of test failures on trunk</b><br>
-     <blockquote>{{CapacityScheduler.setConf()}} mandates a YarnConfiguration. It doesn't need to; throughout YARN, components depend only on Configuration and rely on the callers to provide the correct configuration.
-
-This is causing multiple tests to fail.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-178">YARN-178</a>.
-     Critical bug reported by Radim Kolar and fixed by Radim Kolar <br>
-     <b>Fix custom ProcessTree instance creation</b><br>
-     <blockquote>1. The current pluggable ResourceCalculatorProcessTree does not pass the root process id to the custom implementation, making it unusable.
-
-2. The process tree does not extend Configured as it should.
-
-Added a constructor with a pid argument along with a test suite, and added a test that the process tree is correctly configured.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-177">YARN-177</a>.
-     Critical bug reported by Thomas Graves and fixed by Arun C Murthy (capacityscheduler)<br>
-     <b>CapacityScheduler - adding a queue while the RM is running has inconsistent results</b><br>
-     <blockquote>Adding a queue to the capacity scheduler while the RM is running and then running a job in the newly added queue results in very strange behavior.  The cluster Total Memory can either decrease or increase.  We had a cluster where total memory decreased to almost 1/6th of its capacity. On a small test cluster, the capacity went up simply by adding a queue and running wordcount.
-
-Looking at the RM logs, used memory can go negative but other logs show the number positive:
-
-
-2012-10-21 22:56:44,796 [ResourceManager Event Processor] INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.0375 absoluteUsedCapacity=0.0375 used=memory: 7680 cluster=memory: 204800
-
-2012-10-21 22:56:45,831 [ResourceManager Event Processor] INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=-0.0225 absoluteUsedCapacity=-0.0225 used=memory: -4608 cluster=memory: 204800
-
-  </blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-170">YARN-170</a>.
-     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (nodemanager)<br>
-     <b>NodeManager stop() gets called twice on shutdown</b><br>
-     <blockquote>The stop method in the NodeManager gets called twice when the NodeManager is shut down via the shutdown hook.
-
-The first is the stop that gets called directly by the shutdown hook.  The second occurs when the NodeStatusUpdaterImpl is stopped: the NodeManager responds to the NodeStatusUpdaterImpl stop stateChanged event by stopping itself, which is how NodeStatusUpdaterImpl notifies the NodeManager to stop in response to a request from the ResourceManager.
-
-This could be avoided if the NodeStatusUpdaterImpl were to stop the NodeManager by calling its stop method directly (a minimal guard sketch follows below).</blockquote></li>
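-     For illustration only, a minimal guard sketch (not the actual NodeManager code) of making a service's stop idempotent so a second invocation from a child-service callback becomes a no-op:
-{code}
-// Hypothetical sketch: guard stop() so the shutdown hook and the
-// NodeStatusUpdater-triggered path cannot both run the teardown.
-public class GuardedService {
-  private final java.util.concurrent.atomic.AtomicBoolean stopped =
-      new java.util.concurrent.atomic.AtomicBoolean(false);
-
-  public void stop() {
-    if (!stopped.compareAndSet(false, true)) {
-      return;  // already stopped once; ignore the second call
-    }
-    // ... release resources, stop child services ...
-  }
-}
-{code}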
-<li> <a href="https://issues.apache.org/jira/browse/YARN-169">YARN-169</a>.
-     Minor improvement reported by Anthony Rojas and fixed by Anthony Rojas (nodemanager)<br>
-     <b>Update log4j.appender.EventCounter to use org.apache.hadoop.log.metrics.EventCounter</b><br>
-     <blockquote>We should update the log4j.appender.EventCounter in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties to use *org.apache.hadoop.log.metrics.EventCounter* rather than *org.apache.hadoop.metrics.jvm.EventCounter* to avoid triggering the following warning:
-
-{code}WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files{code}</blockquote></li>
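-     A minimal container-log4j.properties fragment with the updated appender class named in the description above (surrounding properties omitted):
-{code}
-log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
-{code}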
-<li> <a href="https://issues.apache.org/jira/browse/YARN-166">YARN-166</a>.
-     Major bug reported by Thomas Graves and fixed by Thomas Graves (capacityscheduler)<br>
-     <b>capacity scheduler doesn't allow capacity &lt; 1.0</b><br>
-     <blockquote>1.x supports queue capacity &lt; 1, but in 0.23 the capacity scheduler doesn't.  This is an issue for us since we have a large cluster running 1.x that currently has a queue with capacity 0.5%.</blockquote></li>
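-     For illustration, the kind of fractional queue capacity this change is meant to allow in capacity-scheduler.xml; the queue name "smallq" is hypothetical:
-{code}
-&lt;property&gt;
-  &lt;name&gt;yarn.scheduler.capacity.root.smallq.capacity&lt;/name&gt;
-  &lt;value&gt;0.5&lt;/value&gt;
-&lt;/property&gt;
-{code}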
-<li> <a href="https://issues.apache.org/jira/browse/YARN-165">YARN-165</a>.
-     Blocker improvement reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
-     <b>RM should point tracking URL to RM web page for app when AM fails</b><br>
-     <blockquote>Currently when an ApplicationMaster fails the ResourceManager is updating the tracking URL to an empty string, see RMAppAttemptImpl.ContainerFinishedTransition.  Unfortunately when the client attempts to follow the proxy URL it results in a web page showing an HTTP 500 error and an ugly backtrace because "http://" isn't a very helpful tracking URL.
-
-It would be much more helpful if the proxy URL redirected to the RM webapp page for the specific application.  That page shows the various AM attempts and pointers to their logs which will be useful for debugging the problems that caused the AM attempts to fail.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-163">YARN-163</a>.
-     Major bug reported by Jason Lowe and fixed by Jason Lowe (nodemanager)<br>
-     <b>Retrieving container log via NM webapp can hang with multibyte characters in log</b><br>
-     <blockquote>ContainerLogsBlock.printLogs currently assumes that skipping N bytes in the log file is the same as skipping N characters, but that is not true when the log contains multibyte characters.  This can cause the loop that skips a portion of the log to try to skip past the end of the file and loop forever (or until Jetty kills the worker thread).</blockquote></li>
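-     A minimal illustration of the byte-versus-character mismatch described above; skipping N bytes is not the same as skipping N characters once multibyte UTF-8 sequences are involved:
-{code}
-import java.nio.charset.StandardCharsets;
-
-public class SkipMismatch {
-  public static void main(String[] args) {
-    String line = "h\u00e9llo";  // the accented character encodes to 2 bytes in UTF-8
-    byte[] utf8 = line.getBytes(StandardCharsets.UTF_8);
-    System.out.println(line.length());  // 5 characters
-    System.out.println(utf8.length);    // 6 bytes - a "5 byte" skip lands mid-character
-  }
-}
-{code}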
-<li> <a href="https://issues.apache.org/jira/browse/YARN-161">YARN-161</a>.
-     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (api)<br>
-     <b>Yarn Common has multiple compiler warnings for unchecked operations</b><br>
-     <blockquote>The warnings are in classes StateMachineFactory, RecordFactoryProvider, RpcFactoryProvider, and YarnRemoteExceptionFactoryProvider.  OpenJDK 1.6.0_24 actually treats these as compilation errors, causing the build to fail.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-159">YARN-159</a>.
-     Major bug reported by Thomas Graves and fixed by Thomas Graves (resourcemanager)<br>
-     <b>RM web ui applications page should be sorted to display last app first </b><br>
-     <blockquote>The RM web UI applications page should be sorted to display the last app first.
-
-It currently sorts with the smallest application id first, i.e. the first apps that were submitted.  Once you have more than a page's worth of apps, it's much more useful to sort so that the biggest appid (the last submitted app) shows up first.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-151">YARN-151</a>.
-     Major bug reported by Robert Joseph Evans and fixed by Ravi Prakash <br>
-     <b>Browser thinks RM main page JS is taking too long</b><br>
-     <blockquote>The main RM page with the default settings of 10,000 applications can cause browsers to think that the JS on the page is stuck and ask you if you want to kill it.  This is a big usability problem.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-150">YARN-150</a>.
-     Major bug reported by Bikas Saha and fixed by Bikas Saha <br>
-     <b>AppRejectedTransition does not unregister app from master service and scheduler</b><br>
-     <blockquote>AttemptStartedTransition() adds the app to the ApplicationMasterService and the scheduler. When the scheduler rejects the app, AppRejectedTransition() forgets to unregister it from the ApplicationMasterService.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-146">YARN-146</a>.
-     Major new feature reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager)<br>
-     <b>Add unit tests for computing fair share in the fair scheduler</b><br>
-     <blockquote>MR1 had TestComputeFairShares.  This should go into the YARN fair scheduler.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-145">YARN-145</a>.
-     Major new feature reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager)<br>
-     <b>Add a Web UI to the fair share scheduler</b><br>
-     <blockquote>The fair scheduler had a UI in MR1.  Port the capacity scheduler web UI and modify appropriately for the fair share scheduler.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-140">YARN-140</a>.
-     Major bug reported by Ahmed Radwan and fixed by Ahmed Radwan (capacityscheduler)<br>
-     <b>Add capacity-scheduler-default.xml to provide a default set of configurations for the capacity scheduler.</b><br>
-     <blockquote>When setting up the capacity scheduler users are faced with problems like:
-
-{code}
-FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
-java.lang.IllegalArgumentException: Illegal capacity of -1 for queue root
-{code}
-
-This basically arises from missing basic configurations that, in many cases, there is no need to provide explicitly; a default configuration would be sufficient. For example, to address the error above, the user needs to add a capacity of 100 to the root queue (see the sketch below).
-
-So we need to add a capacity-scheduler-default.xml to provide the basic set of default configurations required to run the capacity scheduler. The user can still override existing configurations or provide new ones in capacity-scheduler.xml. This is similar to *-default.xml vs *-site.xml for yarn, core, mapred, hdfs, etc.</blockquote></li>
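-     A minimal sketch of the kind of entry such a default file would carry, assigning the full capacity of 100 to the root queue referenced in the error above:
-{code}
-&lt;property&gt;
-  &lt;name&gt;yarn.scheduler.capacity.root.capacity&lt;/name&gt;
-  &lt;value&gt;100&lt;/value&gt;
-&lt;/property&gt;
-{code}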
-<li> <a href="https://issues.apache.org/jira/browse/YARN-139">YARN-139</a>.
-     Major bug reported by Nathan Roberts and fixed by Vinod Kumar Vavilapalli (api)<br>
-     <b>Interrupted Exception within AsyncDispatcher leads to user confusion</b><br>
-     <blockquote>Successful applications tend to get InterruptedExceptions during shutdown. The exception is harmless but it leads to lots of user confusion and therefore could be cleaned up.
-
-
-2012-09-28 14:50:12,477 WARN [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Interrupted Exception while stopping
-java.lang.InterruptedException
-	at java.lang.Object.wait(Native Method)
-	at java.lang.Thread.join(Thread.java:1143)
-	at java.lang.Thread.join(Thread.java:1196)
-	at org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:105)
-	at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
-	at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
-	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:437)
-	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:402)
-	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
-	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
-	at java.lang.Thread.run(Thread.java:619)
-2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.service.AbstractService: Service:Dispatcher is stopped.
-2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.service.AbstractService: Service:org.apache.hadoop.mapreduce.v2.app.MRAppMaster is stopped.
-2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Exiting MR AppMaster..GoodBye</blockquote></li>
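-     One common way such a shutdown-time interrupt is silenced (a hedged sketch, not the committed fix): restore the interrupt status and log at a lower level instead of WARN with a full stack trace.
-{code}
-// Hypothetical sketch of a quieter shutdown join (names are illustrative).
-public class QuietStop {
-  private static final org.apache.commons.logging.Log LOG =
-      org.apache.commons.logging.LogFactory.getLog(QuietStop.class);
-
-  static void joinQuietly(Thread eventHandlingThread) {
-    try {
-      eventHandlingThread.join();
-    } catch (InterruptedException ie) {
-      Thread.currentThread().interrupt();  // preserve interrupt status
-      LOG.debug("Interrupted while stopping the dispatcher", ie);  // no scary WARN + stack trace
-    }
-  }
-}
-{code}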
-<li> <a href="https://issues.apache.org/jira/browse/YARN-136">YARN-136</a>.
-     Major bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli (resourcemanager)<br>
-     <b>Make ClientTokenSecretManager part of RMContext</b><br>
-     <blockquote>Helps to add it to the context instead of passing it all around as an extra parameter.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-135">YARN-135</a>.
-     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli (resourcemanager)<br>
-     <b>ClientTokens should be per app-attempt and be unregistered on App-finish.</b><br>
-     <blockquote>Two issues:
- - ClientTokens are per app-attempt but are created per app.
- - Apps don't get unregistered from RMClientTokenSecretManager.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-134">YARN-134</a>.
-     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
-     <b>ClientToAMSecretManager creates keys without checking for validity of the appID</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-133">YARN-133</a>.
-     Major bug reported by Thomas Graves and fixed by Ravi Prakash (resourcemanager)<br>
-     <b>update web services docs for RM clusterMetrics</b><br>
-     <blockquote>It looks like https://issues.apache.org/jira/browse/MAPREDUCE-3747 added more RM cluster metrics, but the docs were not updated: http://hadoop.apache.org/docs/r0.23.3/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Metrics_API</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-131">YARN-131</a>.
-     Major bug reported by Ahmed Radwan and fixed by Ahmed Radwan (capacityscheduler)<br>
-     <b>Incorrect ACL properties in capacity scheduler documentation</b><br>
-     <blockquote>The CapacityScheduler apt file incorrectly specifies the property names controlling acls for application submission and queue administration.
-
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_jobs}}
-should be
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_applications}}
-
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_jobs}}
-should be
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_queue}}
-
-Uploading a patch momentarily.</blockquote></li>
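-     For illustration, a capacity-scheduler.xml fragment using the corrected property names from the description above ("default" is a hypothetical queue path and the ACL values are examples):
-{code}
-&lt;property&gt;
-  &lt;name&gt;yarn.scheduler.capacity.root.default.acl_submit_applications&lt;/name&gt;
-  &lt;value&gt;user1,user2&lt;/value&gt;
-&lt;/property&gt;
-&lt;property&gt;
-  &lt;name&gt;yarn.scheduler.capacity.root.default.acl_administer_queue&lt;/name&gt;
-  &lt;value&gt;admin1&lt;/value&gt;
-&lt;/property&gt;
-{code}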
-<li> <a href="https://issues.apache.org/jira/browse/YARN-129">YARN-129</a>.
-     Major improvement reported by Tom White and fixed by Tom White (client)<br>
-     <b>Simplify classpath construction for mini YARN tests</b><br>
-     <blockquote>The test classpath includes a special file called 'mrapp-generated-classpath' (or similar in distributed shell) that is constructed at build time, and whose contents are a classpath with all the dependencies needed to run the tests. When the classpath for a container (e.g. the AM) is constructed the contents of mrapp-generated-classpath is read and added to the classpath, and the file itself is then added to the classpath so that later when the AM constructs a classpath for a task container it can propagate the test classpath correctly.
-
-This mechanism can be drastically simplified by propagating the system classpath of the current JVM (read from the java.class.path property) to a launched JVM, but only when running in the context of the mini YARN cluster. Any tests that use the mini YARN cluster will automatically work with this change, and any that explicitly deal with mrapp-generated-classpath can be simplified further.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-127">YARN-127</a>.
-     Major bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
-     <b>Move RMAdmin tool to the client package</b><br>
-     <blockquote>It belongs to the client package and not the RM clearly.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-116">YARN-116</a>.
-     Major bug reported by xieguiming and fixed by xieguiming (resourcemanager)<br>
-     <b>RM is missing ability to add include/exclude files without a restart</b><br>
-     <blockquote>The "yarn.resourcemanager.nodes.include-path" default value is "", and if we need to add an include file we currently have to restart the RM.
-
-I suggest that adding an include or exclude file should not require an RM restart; executing the refresh command should be enough. The HDFS NameNode already has this ability.
-
-The fix is to modify how HostsFileReader instances are constructed:
-
-From:
-{code}
-public HostsFileReader(String inFile, 
-                         String exFile)
-{code}
-To:
-{code}
- public HostsFileReader(Configuration conf, 
-                         String NODES_INCLUDE_FILE_PATH,String DEFAULT_NODES_INCLUDE_FILE_PATH,
-                        String NODES_EXCLUDE_FILE_PATH,String DEFAULT_NODES_EXCLUDE_FILE_PATH)
-{code}
-
-Thus, we can re-read the configured files whenever {{refreshNodes}} is invoked and have no need to restart the ResourceManager (a usage sketch follows below).</blockquote></li>
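-     A hedged usage sketch of the intended behaviour, built on the existing two-argument HostsFileReader and its refresh() method; the exclude-path key is assumed to be yarn.resourcemanager.nodes.exclude-path:
-{code}
-import java.io.IOException;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.util.HostsFileReader;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-
-public class RefreshNodesSketch {
-  public static void main(String[] args) throws IOException {
-    Configuration conf = new YarnConfiguration();
-    String includes = conf.get("yarn.resourcemanager.nodes.include-path", "");
-    String excludes = conf.get("yarn.resourcemanager.nodes.exclude-path", "");
-    // Build the reader from the live configuration ...
-    HostsFileReader hostsReader = new HostsFileReader(includes, excludes);
-    // ... and re-read the files when a refreshNodes request arrives,
-    // instead of requiring an RM restart.
-    hostsReader.refresh();
-  }
-}
-{code}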
-<li> <a href="https://issues.apache.org/jira/browse/YARN-103">YARN-103</a>.
-     Major improvement reported by Bikas Saha and fixed by Bikas Saha <br>
-     <b>Add a yarn AM - RM client module</b><br>
-     <blockquote>Add a basic client wrapper library for the AM-RM protocol in order to prevent the same code being duplicated everywhere. Provide helper functions to perform reverse mapping of container requests to the RM allocation resource request table format.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-102">YARN-102</a>.
-     Trivial bug reported by Devaraj K and fixed by Devaraj K (resourcemanager)<br>
-     <b>Move the apache licence header to the top of the file in MemStore.java</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-94">YARN-94</a>.
-     Major bug reported by Vinod Kumar Vavilapalli and fixed by Hitesh Shah (applications/distributed-shell)<br>
-     <b>DistributedShell jar should point to Client as the main class by default</b><br>
-     <blockquote>Today, running the jar without an explicit main class only prints the generic RunJar usage:
-{code}
-$ $YARN_HOME/bin/yarn jar $YARN_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar
-RunJar jarFile [mainClass] args...
-{code}</blockquote></li>
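-     The workaround today is to name the main class explicitly; a hedged example, where the application options shown are only illustrative:
-{code}
-$ $YARN_HOME/bin/yarn jar $YARN_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar \
-    org.apache.hadoop.yarn.applications.distributedshell.Client \
-    -jar $YARN_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar \
-    -shell_command date -num_containers 2
-{code}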
-<li> <a href="https://issues.apache.org/jira/browse/YARN-93">YARN-93</a>.
-     Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
-     <b>Diagnostics missing from applications that have finished but failed</b><br>
-     <blockquote>If an application finishes in the YARN sense but fails in the app framework sense (e.g.: a failed MapReduce job) then diagnostics are missing from the RM web page for the application.  The RM should be reporting diagnostic messages even for successful YARN applications.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-82">YARN-82</a>.
-     Minor bug reported by Andy Isaacson and fixed by Hemanth Yamijala (nodemanager)<br>
-     <b>YARN local-dirs defaults to /tmp/nm-local-dir</b><br>
-     <blockquote>{{yarn.nodemanager.local-dirs}} defaults to {{/tmp/nm-local-dir}}.  It should be {{hadoop.tmp.dir}}/nm-local-dir or similar.  Among other problems, this can prevent multiple test clusters from starting on the same machine.
-
-Thanks to Hemanth Yamijala for reporting this issue.</blockquote></li>
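-     For illustration, the kind of yarn-site.xml override an affected user applies today; the directory value is only an example:
-{code}
-&lt;property&gt;
-  &lt;name&gt;yarn.nodemanager.local-dirs&lt;/name&gt;
-  &lt;value&gt;/var/tmp/my-cluster/nm-local-dir&lt;/value&gt;
-&lt;/property&gt;
-{code}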
-<li> <a href="https://issues.apache.org/jira/browse/YARN-78">YARN-78</a>.
-     Major bug reported by Bikas Saha and fixed by Bikas Saha (applications)<br>
-     <b>Change UnmanagedAMLauncher to use YarnClientImpl</b><br>
-     <blockquote>YARN-29 added a common client impl to talk to the RM. Use that in the UnmanagedAMLauncher.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-72">YARN-72</a>.
-     Major bug reported by Hitesh Shah and fixed by Sandy Ryza (nodemanager)<br>
-     <b>NM should handle cleaning up containers when it shuts down</b><br>
-     <blockquote>Ideally, when the NM gets a shutdown signal it should wait for a limited amount of time for existing containers to complete, and (if we pick an aggressive approach) kill the containers after this time interval.
-
-For NMs which come up after an unclean shutdown, the NM should look through its directories for existing container.pids and try to kill any existing containers matching the pids found.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-57">YARN-57</a>.
-     Major improvement reported by Radim Kolar and fixed by Radim Kolar (nodemanager)<br>
-     <b>Pluggable process tree</b><br>
-     <blockquote>Trunk version of Pluggable process tree. Work based on MAPREDUCE-4204</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-50">YARN-50</a>.
-     Blocker sub-task reported by Siddharth Seth and fixed by Siddharth Seth <br>
-     <b>Implement renewal / cancellation of Delegation Tokens</b><br>
-     <blockquote>Currently, delegation tokens issued by the RM and History server cannot be renewed or cancelled. This needs to be implemented.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-43">YARN-43</a>.
-     Major bug reported by Thomas Graves and fixed by Thomas Graves <br>
-     <b>TestResourceTrackerService fails intermittently on jdk7</b><br>
-     <blockquote>Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.73 sec &lt;&lt;&lt; FAILURE!
-testDecommissionWithIncludeHosts(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)  Time elapsed: 0.086 sec  &lt;&lt;&lt; FAILURE!
-junit.framework.AssertionFailedError: expected:&lt;0&gt; but was:&lt;1&gt;        at junit.framework.Assert.fail(Assert.java:47)
-        at junit.framework.Assert.failNotEquals(Assert.java:283)
-        at junit.framework.Assert.assertEquals(Assert.java:64)
-        at junit.framework.Assert.assertEquals(Assert.java:195)
-        at junit.framework.Assert.assertEquals(Assert.java:201)
-        at org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testDecommissionWithIncludeHosts(TestResourceTrackerService.java:90)</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-40">YARN-40</a>.
-     Major bug reported by Devaraj K and fixed by Devaraj K (client)<br>
-     <b>Provide support for missing yarn commands</b><br>
-     <blockquote>1. status &lt;app-id&gt;
-2. kill &lt;app-id&gt; (Already issue present with Id : MAPREDUCE-3793)
-3. list-apps [all]
-4. nodes-report</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-33">YARN-33</a>.
-     Major bug reported by Mayank Bansal and fixed by Mayank Bansal (nodemanager)<br>
-     <b>LocalDirsHandler should validate the configured local and log dirs</b><br>
-     <blockquote>When yarn.nodemanager.log-dirs is configured with a file:// URI, NodeManager startup creates a directory named after the file:// URI under the CWD.
-
-That directory should not be there.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-32">YARN-32</a>.
-     Major bug reported by Thomas Graves and fixed by Vinod Kumar Vavilapalli <br>
-     <b>TestApplicationTokens fails intermittently on jdk7</b><br>
-     <blockquote>TestApplicationTokens fails intermittently on jdk7.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-30">YARN-30</a>.
-     Major bug reported by Thomas Graves and fixed by Thomas Graves <br>
-     <b>TestNMWebServicesApps, TestRMWebServicesApps and TestRMWebServicesNodes fail on jdk7</b><br>
-     <blockquote>It looks like the string changed from "const class" to "constant". 
-
-
-Tests run: 19, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 6.786 sec &lt;&lt;&lt; FAILURE!
-testNodeAppsStateInvalid(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesApps)  Time elapsed: 0.248 sec  &lt;&lt;&lt; FAILURE!
-java.lang.AssertionError: exception message doesn't match, got: No enum constant org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationState.FOO_STATE expected: No enum const class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationState.FOO_STATE</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-28">YARN-28</a>.
-     Major bug reported by Thomas Graves and fixed by Thomas Graves <br>
-     <b>TestCompositeService fails on jdk7</b><br>
-     <blockquote>TestCompositeService fails when run with jdk7.
-
-It appears to expect the testCallSequence test to be called first and the sequence numbers to start at 0. On jdk7 it is not called first and the sequence number has already been incremented.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-23">YARN-23</a>.
-     Major improvement reported by Karthik Kambatla and fixed by Karthik Kambatla (scheduler)<br>
-     <b>FairScheduler: FSQueueSchedulable#updateDemand() - potential redundant aggregation</b><br>
-     <blockquote>In FS, FSQueueSchedulable#updateDemand() limits the demand to maxTasks only after iterating through all the pools and computing the final demand.
-
-By checking whether the demand has already reached maxTasks in every iteration, we can avoid redundant work at the expense of one condition check per iteration (see the sketch below).</blockquote></li>
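-     A minimal sketch of the early-exit aggregation described above; the names and types are illustrative, not the FairScheduler code:
-{code}
-// Stop aggregating once the cap is reached; the remaining pools
-// cannot raise the demand any further.
-static int aggregateDemand(int[] poolDemands, int maxTasks) {
-  int demand = 0;
-  for (int poolDemand : poolDemands) {
-    demand += poolDemand;
-    if (demand &gt;= maxTasks) {
-      return maxTasks;  // capped; no need to visit the remaining pools
-    }
-  }
-  return demand;
-}
-{code}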
-<li> <a href="https://issues.apache.org/jira/browse/YARN-3">YARN-3</a>.
-     Major sub-task reported by Arun C Murthy and fixed by Andrew Ferguson <br>
-     <b>Add support for CPU isolation/monitoring of containers</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/YARN-2">YARN-2</a>.
-     Major new feature reported by Arun C Murthy and fixed by Arun C Murthy (capacityscheduler , scheduler)<br>
-     <b>Enhance CS to schedule accounting for both memory and cpu cores</b><br>
-     <blockquote>With YARN being a general purpose system, it would be useful for several applications (MPI et al) to specify not just memory but also CPU (cores) for their resource requirements. Thus, it would be useful to the CapacityScheduler to account for both.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4977">MAPREDUCE-4977</a>.
-     Major improvement reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (documentation)<br>
-     <b>Documentation for pluggable shuffle and pluggable sort</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4971">MAPREDUCE-4971</a>.
-     Minor improvement reported by Arun C Murthy and fixed by Arun C Murthy <br>
-     <b>Minor extensibility enhancements </b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4969">MAPREDUCE-4969</a>.
-     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
-     <b>TestKeyValueTextInputFormat test fails with Open JDK 7</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4953">MAPREDUCE-4953</a>.
-     Major bug reported by Andy Isaacson and fixed by Andy Isaacson (pipes)<br>
-     <b>HadoopPipes misuses fprintf</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4949">MAPREDUCE-4949</a>.
-     Minor improvement reported by Sandy Ryza and fixed by Sandy Ryza (examples)<br>
-     <b>Enable multiple pi jobs to run in parallel</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4948">MAPREDUCE-4948</a>.
-     Critical bug reported by Junping Du and fixed by Junping Du (client)<br>
-     <b>TestYARNRunner.testHistoryServerToken failed on trunk</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4946">MAPREDUCE-4946</a>.
-     Critical bug reported by Jason Lowe and fixed by Jason Lowe (mr-am)<br>
-     <b>Type conversion of map completion events leads to performance problems with large jobs</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4936">MAPREDUCE-4936</a>.
-     Critical bug reported by Daryn Sharp and fixed by Arun C Murthy (mrv2)<br>
-     <b>JobImpl uber checks for cpu are wrong</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4934">MAPREDUCE-4934</a>.
-     Critical bug reported by Thomas Graves and fixed by Thomas Graves (build)<br>
-     <b>Maven RAT plugin is not checking all source files</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4928">MAPREDUCE-4928</a>.
-     Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas (applicationmaster , security)<br>
-     <b>Use token request messages defined in hadoop common </b><br>
-     <blockquote>The renewer field of the protobuf message GetDelegationTokenRequestProto is changed from optional to required. This change is not wire compatible with older releases.</blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4925">MAPREDUCE-4925</a>.
-     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (examples)<br>
-     <b>The pentomino option parser may be buggy</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4924">MAPREDUCE-4924</a>.
-     Trivial bug reported by Robert Kanter and fixed by Robert Kanter (mrv1)<br>
-     <b>flakey test: org.apache.hadoop.mapred.TestClusterMRNotification.testMR</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4923">MAPREDUCE-4923</a>.
-     Minor bug reported by Sandy Ryza and fixed by Sandy Ryza (mrv1 , mrv2 , task)<br>
-     <b>Add toString method to TaggedInputSplit</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4921">MAPREDUCE-4921</a>.
-     Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (client)<br>
-     <b>JobClient should acquire HS token with RM principal</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4920">MAPREDUCE-4920</a>.
-     Major bug reported by Vinod Kumar Vavilapalli and fixed by Suresh Srinivas <br>
-     <b>Use security token protobuf definition from hadoop common</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4913">MAPREDUCE-4913</a>.
-     Major bug reported by Jason Lowe and fixed by Jason Lowe (mr-am)<br>
-     <b>TestMRAppMaster#testMRAppMasterMissingStaging occasionally exits</b><br>
-     <blockquote></blockquote></li>
-<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4907">MAPREDUCE-4907</a>.

[... 12346 lines stripped ...]