Posted to common-commits@hadoop.apache.org by ac...@apache.org on 2013/09/17 07:24:20 UTC

svn commit: r1523893 [2/2] - in /hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common: CHANGES.txt src/main/docs/releasenotes.html

Modified: hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html?rev=1523893&r1=1523892&r2=1523893&view=diff
==============================================================================
--- hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-2.1-beta/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html Tue Sep 17 05:24:19 2013
@@ -1,4 +1,1132 @@
 <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<title>Hadoop  2.1.1-beta Release Notes</title>
+<STYLE type="text/css">
+	H1 {font-family: sans-serif}
+	H2 {font-family: sans-serif; margin-left: 7mm}
+	TABLE {margin-left: 7mm}
+</STYLE>
+</head>
+<body>
+<h1>Hadoop  2.1.1-beta Release Notes</h1>
+These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
+<a name="changes"/>
+<h2>Changes since Hadoop 2.1.0-beta</h2>
+<ul>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1194">YARN-1194</a>.
+     Minor bug reported by Roman Shaposhnik and fixed by Roman Shaposhnik (nodemanager)<br>
+     <b>TestContainerLogsPage fails with native builds</b><br>
+     <blockquote>Running TestContainerLogsPage on trunk while Native IO is enabled makes it fail</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1189">YARN-1189</a>.
+     Blocker bug reported by Jason Lowe and fixed by Omkar Vinit Joshi <br>
+     <b>NMTokenSecretManagerInNM is not being told when applications have finished </b><br>
+     <blockquote>The {{appFinished}} method is not being called when applications have finished.  This causes a couple of leaks as {{oldMasterKeys}} and {{appToAppAttemptMap}} are never being pruned.
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1184">YARN-1184</a>.
+     Major bug reported by J.Andreina and fixed by Chris Douglas (capacityscheduler , resourcemanager)<br>
+     <b>ClassCastException is thrown during preemption when a huge job is submitted to a queue B whose resources are used by a job in queue A</b><br>
+     <blockquote>Preemption is enabled.
+Queue = a,b
+a capacity = 30%
+b capacity = 70%
+
+Step 1: Assign a big job to queue a (so that job_a will utilize some resources from queue b).
+Step 2: Assign a big job to queue b.
+
+The following exception is thrown at the Resource Manager:
+{noformat}
+2013-09-12 10:42:32,535 ERROR [SchedulingMonitor (ProportionalCapacityPreemptionPolicy)] yarn.YarnUncaughtExceptionHandler (YarnUncaughtExceptionHandler.java:uncaughtException(68)) - Thread Thread[SchedulingMonitor (ProportionalCapacityPreemptionPolicy),5,main] threw an Exception.
+java.lang.ClassCastException: java.util.Collections$UnmodifiableSet cannot be cast to java.util.NavigableSet
+	at org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.getContainersToPreempt(ProportionalCapacityPreemptionPolicy.java:403)
+	at org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.containerBasedPreemptOrKill(ProportionalCapacityPreemptionPolicy.java:202)
+	at org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.editSchedule(ProportionalCapacityPreemptionPolicy.java:173)
+	at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.invokePolicy(SchedulingMonitor.java:72)
+	at org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor$PreemptionChecker.run(SchedulingMonitor.java:82)
+	at java.lang.Thread.run(Thread.java:662)
+
+{noformat}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1176">YARN-1176</a>.
+     Critical bug reported by Thomas Graves and fixed by Jonathan Eagles (resourcemanager)<br>
+     <b>RM web services ClusterMetricsInfo total nodes doesn't include unhealthy nodes</b><br>
+     <blockquote>In the web services API for cluster/metrics, the totalNodes value reported doesn't include the unhealthy nodes (see the sketch after this item).
+
+this.totalNodes = activeNodes + lostNodes + decommissionedNodes
+	        + rebootedNodes;</blockquote></li>
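+A minimal sketch of the corrected aggregation, assuming the field names shown in the snippet above plus a corresponding unhealthyNodes counter (the actual fix in ClusterMetricsInfo may differ):
+{code}
+// Hypothetical helper illustrating the corrected total; parameter names are assumptions.
+static int computeTotalNodes(int activeNodes, int lostNodes, int decommissionedNodes,
+    int rebootedNodes, int unhealthyNodes) {
+  // Unhealthy nodes are still registered with the cluster, so count them in the total.
+  return activeNodes + lostNodes + decommissionedNodes + rebootedNodes + unhealthyNodes;
+}
+{code}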
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1170">YARN-1170</a>.
+     Blocker bug reported by Arun C Murthy and fixed by Binglin Chang <br>
+     <b>yarn proto definitions should specify package as 'hadoop.yarn'</b><br>
+     <blockquote>yarn proto definitions should specify package as 'hadoop.yarn' similar to protos with 'hadoop.common' &amp; 'hadoop.hdfs' in Common &amp; HDFS respectively.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1152">YARN-1152</a>.
+     Blocker bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
+     <b>Invalid key to HMAC computation error when getting application report for completed app attempt</b><br>
+     <blockquote>On a secure cluster, an invalid key to HMAC error is thrown when trying to get an application report for an application with an attempt that has unregistered.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1144">YARN-1144</a>.
+     Critical bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (resourcemanager)<br>
+     <b>Unmanaged AMs registering a tracking URI should not be proxy-fied</b><br>
+     <blockquote>Unmanaged AMs do not run in the cluster, so their tracking URL should not be proxied.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1137">YARN-1137</a>.
+     Major improvement reported by Alejandro Abdelnur and fixed by Roman Shaposhnik (nodemanager)<br>
+     <b>Add support whitelist for system users to Yarn container-executor.c</b><br>
+     <blockquote>Currently container-executor.c has a set of banned users (mapred, hdfs &amp; bin) and a configurable min.user.id (defaulting to 1000).
+
+This presents a problem for systems that run as system users (below 1000) if these systems want to start containers.
+
+Systems like Impala fit in this category. A (local) 'impala' system user is created when installing Impala on the nodes.
+
+Note that the same thing happens when installing systems like HDFS, YARN, or Oozie from packages (Bigtop); local system users are created.
+
+For Impala to be able to run containers in a secure cluster, the 'impala' system user must be whitelisted.
+
+For this, an 'allowed.system.users' option would be added to container-executor.cfg, with logic in container-executor.c to allow the usernames in that list (see the sketch after this item).
+
+Because system users are not guaranteed to have the same UID on different machines, the 'allowed.system.users' property should use usernames and not UIDs.
+</blockquote></li>
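+A hedged example of what such a whitelist entry could look like in container-executor.cfg; the surrounding keys are shown only for context and the exact values depend on the deployment:
+{code}
+# container-executor.cfg (illustrative)
+yarn.nodemanager.linux-container-executor.group=yarn
+banned.users=hdfs,yarn,mapred,bin
+min.user.id=1000
+# comma-separated usernames below min.user.id that are still allowed to run containers
+allowed.system.users=impala
+{code}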
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1124">YARN-1124</a>.
+     Blocker bug reported by Omkar Vinit Joshi and fixed by Xuan Gong <br>
+     <b>By default yarn application -list should display all the applications in a state other than FINISHED / FAILED</b><br>
+     <blockquote>Today we only list applications in the RUNNING state by default for "yarn application -list". Instead we should show all applications that are submitted, accepted, or running.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1120">YARN-1120</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>Make ApplicationConstants.Environment.USER definition OS neutral</b><br>
+     <blockquote>In YARN-557, we added some code to give {{ApplicationConstants.Environment.USER}} an OS-specific definition in order to fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant test code was corrected. In YARN-602, we now explicitly set the environment variables for the child containers. With these changes, I think we can revert the YARN-557 change and make {{ApplicationConstants.Environment.USER}} OS neutral. The main benefit is that we can use the same method over the Enum constants. This should also fix the TestContainerLaunch#testContainerEnvVariables failure on Windows.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1117">YARN-1117</a>.
+     Major improvement reported by Tassapol Athiapinya and fixed by Xuan Gong (client)<br>
+     <b>Improve help message for $ yarn applications and $ yarn node</b><br>
+     <blockquote>The help message was standardized in YARN-1080. It would be nice to have similar changes for $ yarn applications and $ yarn node.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1116">YARN-1116</a>.
+     Major sub-task reported by Jian He and fixed by Jian He (resourcemanager)<br>
+     <b>Populate AMRMTokens back to AMRMTokenSecretManager after RM restarts</b><br>
+     <blockquote>The AMRMTokens are currently only saved in the RMStateStore and are not populated back to the AMRMTokenSecretManager after the RM restarts. This is needed even more now that the AMRMToken is also used in non-secure environments.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1107">YARN-1107</a>.
+     Blocker bug reported by Arpit Gupta and fixed by Omkar Vinit Joshi (resourcemanager)<br>
+     <b>Job submitted with Delegation token in secured environment causes RM to fail during RM restart</b><br>
+     <blockquote>If a secure RM with recovery enabled is restarted while Oozie jobs are running, the RM fails to come up.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1101">YARN-1101</a>.
+     Major bug reported by Robert Parker and fixed by Robert Parker (resourcemanager)<br>
+     <b>Active nodes can be decremented below 0</b><br>
+     <blockquote>The issue is in RMNodeImpl, where both the RUNNING and UNHEALTHY states use the same DeactivateNodeTransition class when transitioning to a deactivated state (LOST, DECOMMISSIONED, REBOOTED). DeactivateNodeTransition naturally decrements the active node count; however, in cases where the node has transitioned to UNHEALTHY, the active count has already been decremented.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1094">YARN-1094</a>.
+     Blocker bug reported by Yesha Vora and fixed by Vinod Kumar Vavilapalli <br>
+     <b>RM restart throws Null pointer Exception in Secure Env</b><br>
+     <blockquote>Enable the RM restart feature and restart the Resource Manager while a job is running.
+
+The Resource Manager fails to start with the error below:
+
+2013-08-23 17:57:40,705 INFO  resourcemanager.RMAppManager (RMAppManager.java:recover(370)) - Recovering application application_1377280618693_0001
+2013-08-23 17:57:40,763 ERROR resourcemanager.ResourceManager (ResourceManager.java:serviceStart(617)) - Failed to load/recover state
+java.lang.NullPointerException
+        at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.setTimerForTokenRenewal(DelegationTokenRenewer.java:371)
+        at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.addApplication(DelegationTokenRenewer.java:307)
+        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291)
+        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:371)
+        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:819)
+        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:613)
+        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
+        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:832)
+2013-08-23 17:57:40,766 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
+                                                                                                    
+
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1093">YARN-1093</a>.
+     Major bug reported by Wing Yew Poon and fixed by  (documentation)<br>
+     <b>Corrections to Fair Scheduler documentation</b><br>
+     <blockquote>The fair scheduler is still evolving, but the current documentation contains some inaccuracies.
+
+
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1085">YARN-1085</a>.
+     Blocker task reported by Jaimin D Jetly and fixed by Omkar Vinit Joshi (nodemanager , resourcemanager)<br>
+     <b>Yarn and MRv2 should do HTTP client authentication in kerberos setup.</b><br>
+     <blockquote>In a Kerberos setup, an HTTP client is expected to authenticate to Kerberos before the user is allowed to browse any information.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1083">YARN-1083</a>.
+     Major bug reported by Yesha Vora and fixed by Zhijie Shen (resourcemanager)<br>
+     <b>ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms is set less than heartbeat interval</b><br>
+     <blockquote>If 'yarn.nm.liveness-monitor.expiry-interval-ms' is set to less than the heartbeat interval, all the node managers will be added to 'Lost Nodes'.
+
+Instead, the Resource Manager should validate these properties and fail to start if the combination is invalid (see the sketch after this item).</blockquote></li>
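+A hedged sketch of such a start-up validation, assuming the configuration key named above and the standard RM heartbeat-interval key; the actual check in the RM may differ:
+{code}
+// Illustrative validation helper, not the exact ResourceManager code.
+static void validateLivenessConfig(org.apache.hadoop.conf.Configuration conf) {
+  long expiryMs = conf.getLong("yarn.nm.liveness-monitor.expiry-interval-ms", 600000L);
+  long heartbeatMs = conf.getLong("yarn.resourcemanager.nodemanagers.heartbeat-interval-ms", 1000L);
+  if (expiryMs < heartbeatMs) {
+    // Fail fast instead of silently marking every NM as lost.
+    throw new org.apache.hadoop.yarn.exceptions.YarnRuntimeException(
+        "NM expiry interval (" + expiryMs + " ms) must not be smaller than the NM heartbeat interval ("
+        + heartbeatMs + " ms)");
+  }
+}
+{code}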
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1082">YARN-1082</a>.
+     Blocker bug reported by Arpit Gupta and fixed by Vinod Kumar Vavilapalli (resourcemanager)<br>
+     <b>Secure RM with recovery enabled and rm state store on hdfs fails with gss exception</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1081">YARN-1081</a>.
+     Minor improvement reported by Tassapol Athiapinya and fixed by Akira AJISAKA (client)<br>
+     <b>Minor improvement to output header for $ yarn node -list</b><br>
+     <blockquote>The output of $ yarn node -list shows the number of running containers at each node. I found a case where a new YARN user thought this was a container ID, used it later in other YARN commands, and got an error due to the misunderstanding.
+
+{code:title=current output}
+2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
+2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
+2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id	Node-State	Node-Http-Address	Running-Containers
+2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454	   RUNNING	myhost:50060	   2
+{code}
+
+{code:title=proposed output}
+2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
+2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
+2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id	Node-State	Node-Http-Address	Number-of-Running-Containers
+2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454	   RUNNING	myhost:50060	   2
+{code}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1080">YARN-1080</a>.
+     Major improvement reported by Tassapol Athiapinya and fixed by Xuan Gong (client)<br>
+     <b>Improve help message for $ yarn logs</b><br>
+     <blockquote>There are 2 parts I am proposing in this jira. They can be fixed together in one patch.
+
+1. Standardize the help message for the required parameter of $ yarn logs.
+The YARN CLI has a command "logs" ($ yarn logs). The command always requires a "-applicationId &lt;arg&gt;" parameter. However, the command's help message does not make this clear: it lists -applicationId as an optional parameter, yet if I don't set it, the YARN CLI complains that it is missing. It would be better to use the standard required-argument notation used by other Linux commands, so that any user familiar with such commands can easily see that this parameter is needed.
+
+{code:title=current help message}
+-bash-4.1$ yarn logs
+usage: general options are:
+ -applicationId &lt;arg&gt;   ApplicationId (required)
+ -appOwner &lt;arg&gt;        AppOwner (assumed to be current user if not
+                        specified)
+ -containerId &lt;arg&gt;     ContainerId (must be specified if node address is
+                        specified)
+ -nodeAddress &lt;arg&gt;     NodeAddress in the format nodename:port (must be
+                        specified if container id is specified)
+{code}
+
+{code:title=proposed help message}
+-bash-4.1$ yarn logs
+usage: yarn logs -applicationId &lt;application ID&gt; [OPTIONS]
+general options are:
+ -appOwner &lt;arg&gt;        AppOwner (assumed to be current user if not
+                        specified)
+ -containerId &lt;arg&gt;     ContainerId (must be specified if node address is
+                        specified)
+ -nodeAddress &lt;arg&gt;     NodeAddress in the format nodename:port (must be
+                        specified if container id is specified)
+{code}
+
+2. Add a description to the help command. As far as I know, a user cannot get logs for a running job. Since I spent some time trying to get logs of running applications, it would be nice to state this in the command description.
+{code:title=proposed help}
+Retrieve logs for completed/killed YARN application
+usage: general options are...
+{code}
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1078">YARN-1078</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestNodeManagerResync, TestNodeManagerShutdown, and TestNodeStatusUpdater fail on Windows</b><br>
+     <blockquote>The three unit tests fail on Windows due to host name resolution differences, i.e. 127.0.0.1 does not resolve to the host name "localhost".
+
+{noformat}
+org.apache.hadoop.security.token.SecretManager$InvalidToken: Given Container container_0_0000_01_000000 identifier is not valid for current Node manager. Expected : 127.0.0.1:12345 Found : localhost:12345
+{noformat}
+
+{noformat}
+testNMConnectionToRM(org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater)  Time elapsed: 8343 sec  &lt;&lt;&lt; FAILURE!
+org.junit.ComparisonFailure: expected:&lt;[localhost]:12345&gt; but was:&lt;[127.0.0.1]:12345&gt;
+	at org.junit.Assert.assertEquals(Assert.java:125)
+	at org.junit.Assert.assertEquals(Assert.java:147)
+	at org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater$MyResourceTracker6.registerNodeManager(TestNodeStatusUpdater.java:712)
+	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
+	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
+	at java.lang.reflect.Method.invoke(Method.java:597)
+	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
+	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
+	at $Proxy26.registerNodeManager(Unknown Source)
+	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:212)
+	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.serviceStart(NodeStatusUpdaterImpl.java:149)
+	at org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater$MyNodeStatusUpdater4.serviceStart(TestNodeStatusUpdater.java:369)
+	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
+	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:101)
+	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:213)
+	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
+	at org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater.testNMConnectionToRM(TestNodeStatusUpdater.java:985)
+{noformat}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1077">YARN-1077</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestContainerLaunch fails on Windows</b><br>
+     <blockquote>Several cases in this unit test fail on Windows. (The error log is appended at the end.)
+
+testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd and bash script error handling. If some command fails in a cmd script, cmd will continue to execute the rest of the script's commands; error handling needs to be carried out explicitly in the script file, and the error code of the last command is returned as the error code of the whole script. In this test, an error happens in the middle of the cmd script and the test expects an exception and a non-zero error code, but in the cmd script the intermediate errors are ignored, the last command "call" succeeds, and there is no exception.
+
+testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands used by the test.
+
+testContainerEnvVariables and testDelayedKill fail due to a regression from YARN-906.
+
+{noformat}
+-------------------------------------------------------------------------------
+Test set: org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
+-------------------------------------------------------------------------------
+Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec &lt;&lt;&lt; FAILURE!
+testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 583 sec  &lt;&lt;&lt; FAILURE!
+junit.framework.AssertionFailedError: Should catch exception
+	at junit.framework.Assert.fail(Assert.java:50)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
+...
+
+testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 561 sec  &lt;&lt;&lt; FAILURE!
+junit.framework.AssertionFailedError: Should catch exception
+	at junit.framework.Assert.fail(Assert.java:50)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
+...
+
+testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 4136 sec  &lt;&lt;&lt; FAILURE!
+junit.framework.AssertionFailedError: expected:&lt;137&gt; but was:&lt;143&gt;
+	at junit.framework.Assert.fail(Assert.java:50)
+	at junit.framework.Assert.failNotEquals(Assert.java:287)
+	at junit.framework.Assert.assertEquals(Assert.java:67)
+	at junit.framework.Assert.assertEquals(Assert.java:199)
+	at junit.framework.Assert.assertEquals(Assert.java:205)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
+...
+
+testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 2744 sec  &lt;&lt;&lt; FAILURE!
+junit.framework.AssertionFailedError: expected:&lt;137&gt; but was:&lt;143&gt;
+	at junit.framework.Assert.fail(Assert.java:50)
+	at junit.framework.Assert.failNotEquals(Assert.java:287)
+	at junit.framework.Assert.assertEquals(Assert.java:67)
+	at junit.framework.Assert.assertEquals(Assert.java:199)
+	at junit.framework.Assert.assertEquals(Assert.java:205)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testDelayedKill(TestContainerLaunch.java:601)
+...
+{noformat}
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1074">YARN-1074</a>.
+     Major improvement reported by Tassapol Athiapinya and fixed by Xuan Gong (client)<br>
+     <b>Clean up YARN CLI app list to show only running apps.</b><br>
+     <blockquote>Once a user brings up the YARN daemons and runs jobs, the jobs stay in the output returned by $ yarn application -list even after they have completed. We want the YARN command line to clean up this list. Specifically, we want to remove applications in the FINISHED state (not Final-State) or the KILLED state from the result.
+
+{code}
+[user1@host1 ~]$ yarn application -list
+Total Applications:150
+                Application-Id	    Application-Name	    Application-Type	      User	     Queue	             State       Final-State	       Progress	                       Tracking-URL
+application_1374638600275_0109	           Sleep job	           MAPREDUCE	    user1	   default	            KILLED            KILLED	           100%	   host1:54059
+application_1374638600275_0121	           Sleep job	           MAPREDUCE	    user1	   default	          FINISHED         SUCCEEDED	           100%	host1:19888/jobhistory/job/job_1374638600275_0121
+application_1374638600275_0020	           Sleep job	           MAPREDUCE	    user1	   default	          FINISHED         SUCCEEDED	           100%	host1:19888/jobhistory/job/job_1374638600275_0020
+application_1374638600275_0038	           Sleep job	           MAPREDUCE	    user1	   default	
+....
+{code}
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1049">YARN-1049</a>.
+     Blocker bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (api)<br>
+     <b>ContainerExitStatus should define a status for preempted containers</b><br>
+     <blockquote>With the current behavior it is impossible to determine whether a container has been preempted or lost due to an NM crash.
+
+Adding a PREEMPTED exit status (-102) will help an AM determine that a container has been preempted (see the sketch after this item).
+
+Note the change of scope from the original summary/description. The original scope proposed API/behavior changes. Because we are past 2.1.0-beta, I'm reducing the scope of this JIRA.</blockquote></li>
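+A hedged sketch of how an AM might use the new exit status when processing completed containers; the record and method names follow the public YARN API, but the surrounding loop is illustrative:
+{code}
+// Sketch: distinguish preemption from other failures in the AM's allocate-response handling.
+for (ContainerStatus status : allocateResponse.getCompletedContainersStatuses()) {
+  if (status.getExitStatus() == ContainerExitStatus.PREEMPTED) {
+    // The container was preempted by the scheduler rather than lost to an NM crash,
+    // so the AM can simply re-request the work instead of counting it as a failure.
+  }
+}
+{code}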
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1034">YARN-1034</a>.
+     Trivial task reported by Sandy Ryza and fixed by Karthik Kambatla (documentation , scheduler)<br>
+     <b>Remove "experimental" in the Fair Scheduler documentation</b><br>
+     <blockquote>The YARN Fair Scheduler is largely stable now, and should no longer be declared experimental.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1025">YARN-1025</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (nodemanager , resourcemanager)<br>
+     <b>ResourceManager and NodeManager do not load native libraries on Windows.</b><br>
+     <blockquote>ResourceManager and NodeManager do not have the correct setting for java.library.path when launched on Windows.  This prevents the processes from loading native code from hadoop.dll.  The native code is required for correct functioning on Windows (not optional), so this ultimately can cause failures.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1008">YARN-1008</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (nodemanager)<br>
+     <b>MiniYARNCluster with multiple nodemanagers, all nodes have same key for allocations</b><br>
+     <blockquote>While the NMs are keyed using the NodeId, the allocation is done based on the hostname.
+
+This makes the different nodes indistinguishable to the scheduler.
+
+There should be an option to enable using host:port instead of just the hostname for allocations. The nodes reported to the AM should report the 'key' (host or host:port).
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1006">YARN-1006</a>.
+     Major bug reported by Jian He and fixed by Xuan Gong <br>
+     <b>Nodes list web page on the RM web UI is broken</b><br>
+     <blockquote>The nodes web page, which lists all the connected nodes of the cluster, is broken.
+
+1. The page is not shown in the correct format/style.
+2. If we restart the NM, the node list is not refreshed; the newly started NM is just added to the list and the old NM's information still remains.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-1001">YARN-1001</a>.
+     Blocker task reported by Srimanth Gunturi and fixed by Zhijie Shen (api)<br>
+     <b>YARN should provide per application-type and state statistics</b><br>
+     <blockquote>In Ambari we plan to show for MR2 the number of applications finished, running, waiting, etc. It would be efficient if YARN could provide per application-type and state aggregated counts.
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-994">YARN-994</a>.
+     Major bug reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>HeartBeat thread in AMRMClientAsync does not handle runtime exception correctly</b><br>
+     <blockquote>YARN-654 performs sanity checks on the parameters of public methods in AMRMClient. Those checks may throw runtime exceptions.
+Currently, the heartbeat thread in AMRMClientAsync only catches IOException and YarnException, and does not handle runtime exceptions properly.
+A possible solution: the heartbeat thread catches Throwable and notifies the callback-handler thread via the existing savedException (see the sketch after this item).</blockquote></li>
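+A minimal sketch of that approach; the field and thread names here are illustrative rather than the exact AMRMClientAsyncImpl internals:
+{code}
+// Inside the heartbeat thread's run loop (sketch only).
+try {
+  AllocateResponse response = client.allocate(progressIndicator);
+  responseQueue.put(response);
+} catch (Throwable t) {
+  // Catch Throwable, not just IOException/YarnException, so runtime exceptions from
+  // argument sanity checks also reach the callback handler via savedException.
+  savedException = new Exception("Heartbeat thread failed", t);
+  // Wake the callback-handler thread so it can invoke onError() and shut down cleanly.
+  callbackHandlerThread.interrupt();
+  break;
+}
+{code}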
+<li> <a href="https://issues.apache.org/jira/browse/YARN-981">YARN-981</a>.
+     Major bug reported by Xuan Gong and fixed by Jian He <br>
+     <b>YARN/MR2/Job-history /logs link does not have correct content</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-966">YARN-966</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>The thread of ContainerLaunch#call will fail without any signal if getLocalizedResources() is called when the container is not at LOCALIZED</b><br>
+     <blockquote>In ContainerImpl.getLocalizedResources(), there's:
+{code}
+assert ContainerState.LOCALIZED == getContainerState(); // TODO: FIXME!!
+{code}
+
+ContainerImpl.getLocalizedResources() is called in ContainerLaunch.call(), which is scheduled on a separate thread. If the container is not at LOCALIZED (e.g. it is at KILLING, see YARN-906), an AssertionError will be thrown and the thread fails without notifying the NM. Therefore, the container cannot receive the further events that are supposed to be sent from ContainerLaunch.call(), and it cannot move towards completion (see the sketch after this item).</blockquote></li>
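+A hedged sketch of one way to avoid the unchecked assertion, replacing it with an explicit state check; the logging and null return are assumptions about the shape of the fix:
+{code}
+// Sketch only: an explicit state check instead of a bare assert, so callers on other
+// threads see a handled condition (e.g. a null return) rather than an AssertionError.
+if (ContainerState.LOCALIZED != getContainerState()) {
+  LOG.warn("getLocalizedResources() called at " + getContainerState()
+      + "; resources are only available at LOCALIZED");
+  return null;
+}
+{code}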
+<li> <a href="https://issues.apache.org/jira/browse/YARN-957">YARN-957</a>.
+     Blocker bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Capacity Scheduler tries to reserve the memory more than what node manager reports.</b><br>
+     <blockquote>I have 2 node managers.
+* one with 1024 MB memory (nm1)
+* a second with 2048 MB memory (nm2)
+I am submitting a simple MapReduce application with one mapper and one reducer of 1024 MB each. The steps to reproduce this are:
+* stop nm2 with 2048 MB memory (this is to make sure that this node's heartbeat doesn't reach the RM first).
+* now submit the application. As soon as the RM receives the first node's (nm1) heartbeat, it will try to reserve memory for the AM container (2048 MB). However, nm1 has only 1024 MB of memory.
+* now start nm2 with 2048 MB memory.
+
+It hangs forever... There are two potential issues here.
+* It should not try to reserve memory on a node manager that is never going to be able to provide the requested memory; i.e. the node manager's maximum capability is 1024 MB but 2048 MB is reserved on it. Yet it still does that.
+* Say 2048 MB is reserved on nm1 but nm2 comes back with 2048 MB of available memory. In this case, if the original request was made without any locality, the scheduler should unreserve the memory on nm1 and allocate the requested 2048 MB container on nm2.
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-948">YARN-948</a>.
+     Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>RM should validate the release container list before actually releasing them</b><br>
+     <blockquote>At present we are blindly passing the allocate request containing the containers to be released on to the scheduler. This may result in one application releasing another application's container.
+
+{code}
+  @Override
+  @Lock(Lock.NoLock.class)
+  public Allocation allocate(ApplicationAttemptId applicationAttemptId,
+      List&lt;ResourceRequest&gt; ask, List&lt;ContainerId&gt; release, 
+      List&lt;String&gt; blacklistAdditions, List&lt;String&gt; blacklistRemovals) {
+
+    FiCaSchedulerApp application = getApplication(applicationAttemptId);
+....
+....
+    // Release containers
+    for (ContainerId releasedContainerId : release) {
+      RMContainer rmContainer = getRMContainer(releasedContainerId);
+      if (rmContainer == null) {
+         RMAuditLogger.logFailure(application.getUser(),
+             AuditConstants.RELEASE_CONTAINER, 
+             "Unauthorized access or invalid container", "CapacityScheduler",
+             "Trying to release container not owned by app or with invalid id",
+             application.getApplicationId(), releasedContainerId);
+      }
+      completedContainer(rmContainer,
+          SchedulerUtils.createAbnormalContainerStatus(
+              releasedContainerId, 
+              SchedulerUtils.RELEASED_CONTAINER),
+          RMContainerEventType.RELEASED);
+    }
+{code}
+
+The current checks are not sufficient and we should prevent this..... thoughts? (See the sketch after this item.)</blockquote></li>
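+A hedged sketch of the kind of ownership check the release loop above could perform; the names mirror the quoted snippet, and the actual fix may reject the request (for example with an exception) rather than skipping the container:
+{code}
+    // Release containers, but only those actually owned by the requesting attempt.
+    for (ContainerId releasedContainerId : release) {
+      RMContainer rmContainer = getRMContainer(releasedContainerId);
+      if (rmContainer == null
+          || !rmContainer.getApplicationAttemptId().equals(applicationAttemptId)) {
+        RMAuditLogger.logFailure(application.getUser(),
+            AuditConstants.RELEASE_CONTAINER,
+            "Unauthorized access or invalid container", "CapacityScheduler",
+            "Trying to release container not owned by app or with invalid id",
+            application.getApplicationId(), releasedContainerId);
+        continue; // do not complete a container this attempt does not own
+      }
+      completedContainer(rmContainer,
+          SchedulerUtils.createAbnormalContainerStatus(
+              releasedContainerId, SchedulerUtils.RELEASED_CONTAINER),
+          RMContainerEventType.RELEASED);
+    }
+{code}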
+<li> <a href="https://issues.apache.org/jira/browse/YARN-942">YARN-942</a>.
+     Major bug reported by Sandy Ryza and fixed by Akira AJISAKA (scheduler)<br>
+     <b>In Fair Scheduler documentation, inconsistency on which properties have prefix</b><br>
+     <blockquote>locality.threshold.node and locality.threshold.rack should have the yarn.scheduler.fair prefix like the items before them
+
+http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-910">YARN-910</a>.
+     Major improvement reported by Sandy Ryza and fixed by Alejandro Abdelnur (nodemanager)<br>
+     <b>Allow auxiliary services to listen for container starts and completions</b><br>
+     <blockquote>Making container start and completion events available to auxiliary services would allow them to be resource-aware. An auxiliary service would be able to notify a co-located service that is opportunistically using free capacity about allocation changes.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-906">YARN-906</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Cancelling ContainerLaunch#call at KILLING causes that the container cannot be completed</b><br>
+     <blockquote>See https://builds.apache.org/job/PreCommit-YARN-Build/1435//testReport/org.apache.hadoop.yarn.client.api.impl/TestNMClient/testNMClientNoCleanupOnStop/</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-903">YARN-903</a>.
+     Major bug reported by Abhishek Kapoor and fixed by Omkar Vinit Joshi (applications/distributed-shell)<br>
+     <b>DistributedShell throwing errors in logs after successful completion</b><br>
+     <blockquote>I have tried running DistributedShell and also used its ApplicationMaster for my test.
+The application runs successfully, though it logs some errors which would be useful to fix.
+Below are the logs from the NodeManager and the ApplicationMaster.
+
+Log Snippet for NodeManager
+=============================
+2013-07-07 13:39:18,787 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Connecting to ResourceManager at localhost/127.0.0.1:9990. current no. of attempts is 1
+2013-07-07 13:39:19,050 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -325382586
+2013-07-07 13:39:19,052 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for nm-tokens, got key with id :1005046570
+2013-07-07 13:39:19,053 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as sunny-Inspiron:9993 with total resource of &lt;memory:10240, vCores:8&gt;
+2013-07-07 13:39:19,053 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
+2013-07-07 13:39:35,256 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1373184544832_0001_000001 (auth:SIMPLE)
+2013-07-07 13:39:35,492 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1373184544832_0001_01_000001 by user sunny
+2013-07-07 13:39:35,507 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1373184544832_0001
+2013-07-07 13:39:35,511 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1373184544832_0001	CONTAINERID=container_1373184544832_0001_01_000001
+2013-07-07 13:39:35,511 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1373184544832_0001 transitioned from NEW to INITING
+2013-07-07 13:39:35,512 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1373184544832_0001_01_000001 to application application_1373184544832_0001
+2013-07-07 13:39:35,518 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1373184544832_0001 transitioned from INITING to RUNNING
+2013-07-07 13:39:35,528 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000001 transitioned from NEW to LOCALIZING
+2013-07-07 13:39:35,540 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:9000/application/test.jar transitioned from INIT to DOWNLOADING
+2013-07-07 13:39:35,540 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1373184544832_0001_01_000001
+2013-07-07 13:39:35,675 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/nmPrivate/container_1373184544832_0001_01_000001.tokens. Credentials list: 
+2013-07-07 13:39:35,694 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user sunny
+2013-07-07 13:39:35,803 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/nmPrivate/container_1373184544832_0001_01_000001.tokens to /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001/container_1373184544832_0001_01_000001.tokens
+2013-07-07 13:39:35,803 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set to /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001 = file:/home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001
+2013-07-07 13:39:36,136 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:36,406 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://localhost:9000/application/test.jar transitioned from DOWNLOADING to LOCALIZED
+2013-07-07 13:39:36,409 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000001 transitioned from LOCALIZING to LOCALIZED
+2013-07-07 13:39:36,524 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000001 transitioned from LOCALIZED to RUNNING
+2013-07-07 13:39:36,692 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, -c, /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001/container_1373184544832_0001_01_000001/default_container_executor.sh]
+2013-07-07 13:39:37,144 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:38,147 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:39,151 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:39,209 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1373184544832_0001_01_000001
+2013-07-07 13:39:39,259 WARN org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: Unexpected: procfs stat file is not in the expected format for process with pid 11552
+2013-07-07 13:39:39,264 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 29524 for container-id container_1373184544832_0001_01_000001: 79.9 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used
+2013-07-07 13:39:39,645 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1373184544832_0001_000001 (auth:SIMPLE)
+2013-07-07 13:39:39,651 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1373184544832_0001_01_000002 by user sunny
+2013-07-07 13:39:39,651 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny	IP=127.0.0.1	OPERATION=Start Container Request	TARGET=ContainerManageImpl	RESULT=SUCCESS	APPID=application_1373184544832_0001	CONTAINERID=container_1373184544832_0001_01_000002
+2013-07-07 13:39:39,651 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1373184544832_0001_01_000002 to application application_1373184544832_0001
+2013-07-07 13:39:39,652 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000002 transitioned from NEW to LOCALIZED
+2013-07-07 13:39:39,660 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Getting container-status for container_1373184544832_0001_01_000002
+2013-07-07 13:39:39,661 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Returning container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 2, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:39,728 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000002 transitioned from LOCALIZED to RUNNING
+2013-07-07 13:39:39,873 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, -c, /home/sunny/Hadoop2/hadoopdata/nodemanagerdata/usercache/sunny/appcache/application_1373184544832_0001/container_1373184544832_0001_01_000002/default_container_executor.sh]
+2013-07-07 13:39:39,898 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1373184544832_0001_01_000002 succeeded 
+2013-07-07 13:39:39,899 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000002 transitioned from RUNNING to EXITED_WITH_SUCCESS
+2013-07-07 13:39:39,900 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1373184544832_0001_01_000002
+2013-07-07 13:39:39,942 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny	OPERATION=Container Finished - Succeeded	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1373184544832_0001	CONTAINERID=container_1373184544832_0001_01_000002
+2013-07-07 13:39:39,943 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000002 transitioned from EXITED_WITH_SUCCESS to DONE
+2013-07-07 13:39:39,944 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1373184544832_0001_01_000002 from application application_1373184544832_0001
+2013-07-07 13:39:40,155 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:40,157 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 2, }, state: C_COMPLETE, diagnostics: "", exit_status: 0, 
+2013-07-07 13:39:40,158 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1373184544832_0001_01_000002
+2013-07-07 13:39:40,683 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Getting container-status for container_1373184544832_0001_01_000002
+2013-07-07 13:39:40,686 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:appattempt_1373184544832_0001_000001 (auth:TOKEN) cause:org.apache.hadoop.yarn.exceptions.YarnException: Container container_1373184544832_0001_01_000002 is not handled by this NodeManager
+2013-07-07 13:39:40,687 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9993, call org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainer from 127.0.0.1:51085: error: org.apache.hadoop.yarn.exceptions.YarnException: Container container_1373184544832_0001_01_000002 is not handled by this NodeManager
+org.apache.hadoop.yarn.exceptions.YarnException: Container container_1373184544832_0001_01_000002 is not handled by this NodeManager
+	at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:45)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeGetAndStopContainerRequest(ContainerManagerImpl.java:614)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.stopContainer(ContainerManagerImpl.java:538)
+	at org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.stopContainer(ContainerManagementProtocolPBServiceImpl.java:88)
+	at org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:85)
+	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
+	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
+	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1868)
+	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1864)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at javax.security.auth.Subject.doAs(Subject.java:396)
+	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
+	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1862)
+2013-07-07 13:39:41,162 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_RUNNING, diagnostics: "", exit_status: -1000, 
+2013-07-07 13:39:41,691 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container container_1373184544832_0001_01_000001 succeeded 
+2013-07-07 13:39:41,692 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000001 transitioned from RUNNING to EXITED_WITH_SUCCESS
+2013-07-07 13:39:41,692 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1373184544832_0001_01_000001
+2013-07-07 13:39:41,714 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sunny	OPERATION=Container Finished - Succeeded	TARGET=ContainerImpl	RESULT=SUCCESS	APPID=application_1373184544832_0001	CONTAINERID=container_1373184544832_0001_01_000001
+2013-07-07 13:39:41,714 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1373184544832_0001_01_000001 transitioned from EXITED_WITH_SUCCESS to DONE
+2013-07-07 13:39:41,714 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1373184544832_0001_01_000001 from application application_1373184544832_0001
+2013-07-07 13:39:42,166 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id {, app_attempt_id {, application_id {, id: 1, cluster_timestamp: 1373184544832, }, attemptId: 1, }, id: 1, }, state: C_COMPLETE, diagnostics: "", exit_status: 0, 
+2013-07-07 13:39:42,166 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1373184544832_0001_01_000001
+2013-07-07 13:39:42,191 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1373184544832_0001_000001 (auth:SIMPLE)
+2013-07-07 13:39:42,195 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Getting container-status for container_1373184544832_0001_01_000001
+2013-07-07 13:39:42,196 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:appattempt_1373184544832_0001_000001 (auth:TOKEN) cause:org.apache.hadoop.yarn.exceptions.YarnException: Container container_1373184544832_0001_01_000001 is not handled by this NodeManager
+2013-07-07 13:39:42,196 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9993, call org.apache.hadoop.yarn.api.ContainerManagementProtocolPB.stopContainer from 127.0.0.1:51086: error: org.apache.hadoop.yarn.exceptions.YarnException: Container container_1373184544832_0001_01_000001 is not handled by this NodeManager
+org.apache.hadoop.yarn.exceptions.YarnException: Container container_1373184544832_0001_01_000001 is not handled by this NodeManager
+	at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:45)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeGetAndStopContainerRequest(ContainerManagerImpl.java:614)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.stopContainer(ContainerManagerImpl.java:538)
+	at org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.stopContainer(ContainerManagementProtocolPBServiceImpl.java:88)
+	at org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:85)
+	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
+	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
+	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1868)
+	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1864)
+	at java.security.AccessController.doPrivileged(Native Method)
+	at javax.security.auth.Subject.doAs(Subject.java:396)
+	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
+	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1862)
+2013-07-07 13:39:42,264 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1373184544832_0001_01_000002
+2013-07-07 13:39:42,265 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1373184544832_0001_01_000002
+2013-07-07 13:39:42,265 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1373184544832_0001_01_000001
+2013-07-07 13:39:43,173 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1373184544832_0001 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
+2013-07-07 13:39:43,174 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1373184544832_0001
+2013-07-07 13:39:43,180 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1373184544832_0001 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
+2013-07-07 13:39:43,180 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1373184544832_0001, with delay of 10800 seconds
+
+
+Log Snippet for Application Manager
+==================================
+13/07/07 13:39:36 INFO client.SimpleApplicationMaster: Initializing ApplicationMaster
+13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Application master for app, appId=1, clustertimestamp=1373184544832, attemptId=1
+13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Starting ApplicationMaster
+13/07/07 13:39:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+13/07/07 13:39:37 INFO impl.NMClientAsyncImpl: Upper bound of the thread pool size is 500
+13/07/07 13:39:37 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-nodemanagers-proxies : 500
+13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Max mem capabililty of resources in this cluster 8192
+13/07/07 13:39:37 INFO client.SimpleApplicationMaster: Requested container ask: Capability[&lt;memory:100, vCores:0&gt;]Priority[0]ContainerCount[1]
+13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Got response from RM for container ask, allocatedCnt=1
+13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Launching shell command on a new container., containerId=container_1373184544832_0001_01_000002, containerNode=sunny-Inspiron:9993, containerNodeURI=sunny-Inspiron:8042, containerResourceMemory1024
+13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Setting up container launch container for containerid=container_1373184544832_0001_01_000002
+13/07/07 13:39:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: START_CONTAINER for Container container_1373184544832_0001_01_000002
+13/07/07 13:39:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : sunny-Inspiron:9993
+13/07/07 13:39:39 INFO client.SimpleApplicationMaster: Succeeded to start Container container_1373184544832_0001_01_000002
+13/07/07 13:39:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: QUERY_CONTAINER for Container container_1373184544832_0001_01_000002
+13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Got response from RM for container ask, completedCnt=1
+13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Got container status for containerID=container_1373184544832_0001_01_000002, state=COMPLETE, exitStatus=0, diagnostics=
+13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Container completed successfully., containerId=container_1373184544832_0001_01_000002
+13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Application completed. Stopping running containers
+13/07/07 13:39:40 ERROR impl.NMClientImpl: Failed to stop Container container_1373184544832_0001_01_000002when stopping NMClientImpl
+13/07/07 13:39:40 INFO impl.ContainerManagementProtocolProxy: Closing proxy : sunny-Inspiron:9993
+13/07/07 13:39:40 INFO client.SimpleApplicationMaster: Application completed. Signalling finish to RM
+13/07/07 13:39:41 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting for queue
+java.lang.InterruptedException
+	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
+	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
+	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
+	at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:281)
+13/07/07 13:39:41 INFO client.SimpleApplicationMaster: Application Master completed successfully. exiting
+
+
+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-881">YARN-881</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Priority#compareTo method seems to be wrong.</b><br>
+     <blockquote>If a lower int value means a higher priority, shouldn't the method return "other.getPriority() - this.getPriority()" instead? (See the sketch below.)</blockquote></li>
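+     <blockquote>A minimal sketch of the ordering the report argues for, assuming that a lower int value means a higher priority; SimplePriority is an illustrative stand-in, not the actual YARN Priority class.
+{code}
+// Illustrative sketch only -- not the actual org.apache.hadoop.yarn.api.records.Priority.
+// Assumes a lower integer value means a higher scheduling priority.
+public class SimplePriority implements Comparable {
+    private final int priority;
+
+    public SimplePriority(int priority) {
+        this.priority = priority;
+    }
+
+    public int getPriority() {
+        return priority;
+    }
+
+    @Override
+    public int compareTo(Object o) {
+        SimplePriority other = (SimplePriority) o;
+        // Integer.compare(other, this) is the overflow-safe form of
+        // "other.getPriority() - this.getPriority()": a numerically lower
+        // priority value compares as greater, i.e. as the higher priority.
+        return Integer.compare(other.getPriority(), this.getPriority());
+    }
+
+    public static void main(String[] args) {
+        SimplePriority high = new SimplePriority(0);   // lower value = higher priority
+        SimplePriority low  = new SimplePriority(5);
+        System.out.println(high.compareTo(low));       // prints a positive number
+    }
+}
+{code}
+</blockquote>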
+<li> <a href="https://issues.apache.org/jira/browse/YARN-771">YARN-771</a>.
+     Major sub-task reported by Bikas Saha and fixed by Junping Du <br>
+     <b>AMRMClient  support for resource blacklisting</b><br>
+     <blockquote>After YARN-750, AMRMClient should support blacklisting via the new YARN APIs (see the sketch below).</blockquote></li>
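+     <blockquote>A minimal sketch of how an application master could use the blacklisting support, assuming the AMRMClient#updateBlacklist(additions, removals) method introduced by this change; the host name is made up, and registration, container requests and error handling are omitted.
+{code}
+import java.util.Collections;
+
+import org.apache.hadoop.yarn.client.api.AMRMClient;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+// Sketch: blacklist a misbehaving node so the RM stops allocating containers
+// on it for this application, then lift the blacklist later.
+public class BlacklistSketch {
+    public static void main(String[] args) throws Exception {
+        AMRMClient client = AMRMClient.createAMRMClient();
+        client.init(new YarnConfiguration());
+        client.start();
+
+        // Ask the RM to avoid this node for future container requests.
+        client.updateBlacklist(Collections.singletonList("bad-node.example.com"),
+            Collections.emptyList());
+
+        // ... register, request containers, run work ...
+
+        // If the node recovers, remove it from the blacklist again.
+        client.updateBlacklist(Collections.emptyList(),
+            Collections.singletonList("bad-node.example.com"));
+
+        client.stop();
+    }
+}
+{code}
+</blockquote>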
+<li> <a href="https://issues.apache.org/jira/browse/YARN-758">YARN-758</a>.
+     Minor improvement reported by Bikas Saha and fixed by Karthik Kambatla <br>
+     <b>Augment MockNM to use multiple cores</b><br>
+     <blockquote>YARN-757 got fixed by changing the scheduler from Fair to default (which is capacity).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-707">YARN-707</a>.
+     Blocker improvement reported by Bikas Saha and fixed by Jason Lowe <br>
+     <b>Add user info in the YARN ClientToken</b><br>
+     <blockquote>If user info is present in the client token then it can be used to do limited authz in the AM.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-696">YARN-696</a>.
+     Major improvement reported by Trevor Lorimer and fixed by Trevor Lorimer (resourcemanager)<br>
+     <b>Enable multiple states to be specified in Resource Manager apps REST call</b><br>
+     <blockquote>Within the YARN Resource Manager REST API, the GET call that returns all applications can be filtered by a single State query parameter (http://&lt;rm http address:port&gt;/ws/v1/cluster/apps).
+
+There are 8 possible states (New, Submitted, Accepted, Running, Finishing, Finished, Failed, Killed). If no state parameter is specified, all states are returned; however, if a subset of states is required, multiple REST calls must be made (a maximum of 7).
+
+The proposal is to allow multiple states to be specified in a single REST call (see the sketch below).</blockquote></li>
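+     <blockquote>A minimal sketch of what such a request could look like once multiple states are accepted, assuming a comma-separated query parameter named "states" on the existing apps endpoint; the parameter name, host, port and chosen states are assumptions for illustration.
+{code}
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+
+// Sketch: fetch applications in several states with one REST call instead of
+// one call per state. Host, port and the "states" parameter are illustrative.
+public class MultiStateAppsQuery {
+    public static void main(String[] args) throws Exception {
+        URL url = new URL(
+            "http://rm-host.example.com:8088/ws/v1/cluster/apps?states=FAILED,KILLED");
+        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+        conn.setRequestProperty("Accept", "application/json");
+
+        BufferedReader in = new BufferedReader(
+            new InputStreamReader(conn.getInputStream(), "UTF-8"));
+        String line;
+        while ((line = in.readLine()) != null) {
+            System.out.println(line);   // JSON listing of the matching applications
+        }
+        in.close();
+        conn.disconnect();
+    }
+}
+{code}
+</blockquote>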
+<li> <a href="https://issues.apache.org/jira/browse/YARN-643">YARN-643</a>.
+     Major bug reported by Jian He and fixed by Xuan Gong <br>
+     <b>WHY appToken is removed both in BaseFinalTransition and AMUnregisteredTransition AND clientToken is removed in FinalTransition and not BaseFinalTransition</b><br>
+     <blockquote>This JIRA tracks why appToken and clientToAMToken are removed separately, and why they are handled in different transitions; ideally there should be a common place where these two tokens can be removed at the same time.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-602">YARN-602</a>.
+     Major bug reported by Xuan Gong and fixed by Kenji Kikushima <br>
+     <b>NodeManager should mandatorily set some Environment variables into every container that it launches</b><br>
+     <blockquote>The NodeManager should mandatorily set certain environment variables, such as Environment.user and Environment.pwd, in every container that it launches. If both the user and the NodeManager set these variables, the value set by the NM should be used (see the sketch below).</blockquote></li>
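+     <blockquote>A minimal sketch of how code running inside a container could rely on such variables, using the ApplicationConstants.Environment names; exactly which variables the NodeManager sets mandatorily is decided by the fix, so the selection read below is illustrative.
+{code}
+import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;
+
+// Sketch: a container entry point reading environment variables that the
+// NodeManager is expected to inject into every launched container.
+public class ContainerEnvDump {
+    public static void main(String[] args) {
+        System.out.println("user         = " + System.getenv(Environment.USER.name()));
+        System.out.println("pwd          = " + System.getenv(Environment.PWD.name()));
+        System.out.println("container id = " + System.getenv(Environment.CONTAINER_ID.name()));
+    }
+}
+{code}
+</blockquote>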
+<li> <a href="https://issues.apache.org/jira/browse/YARN-589">YARN-589</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
+     <b>Expose a REST API for monitoring the fair scheduler</b><br>
+     <blockquote>The fair scheduler should have an HTTP interface that exposes information such as applications per queue, fair shares, demands, current allocations.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-573">YARN-573</a>.
+     Critical sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Shared data structures in Public Localizer and Private Localizer are not thread-safe.</b><br>
+     <blockquote>PublicLocalizer:
+1) pending is accessed both by addResource (as part of event handling) and by the run method (as part of PublicLocalizer.run()).
+
+PrivateLocalizer:
+1) pending is accessed both by addResource (as part of event handling) and by findNextResource (i.remove()). The update method should also be fixed, since it shares the same pending list (see the sketch below).
+</blockquote></li>
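+     <blockquote>A minimal, generic sketch of the kind of synchronization being asked for around the shared pending list; this is not the actual localizer code, and the class and method names are only illustrative.
+{code}
+import java.util.ArrayList;
+import java.util.List;
+
+// Illustrative only -- not the actual PublicLocalizer/PrivateLocalizer classes.
+// Models a "pending" list touched both by an event-handling thread (addResource)
+// and by a worker thread (findNextResource), so every access holds the same lock.
+public class PendingResourceQueue {
+    private final List pending = new ArrayList();   // shared between threads
+
+    // Called from the event dispatcher thread.
+    public void addResource(Object request) {
+        synchronized (pending) {
+            pending.add(request);
+        }
+    }
+
+    // Called from the localizer worker thread; removes and returns the next
+    // request, or null if nothing is pending.
+    public Object findNextResource() {
+        synchronized (pending) {
+            if (pending.isEmpty()) {
+                return null;
+            }
+            return pending.remove(0);
+        }
+    }
+}
+{code}
+</blockquote>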
+<li> <a href="https://issues.apache.org/jira/browse/YARN-540">YARN-540</a>.
+     Major sub-task reported by Jian He and fixed by Jian He (resourcemanager)<br>
+     <b>Race condition causing RM to potentially relaunch already unregistered AMs on RM restart</b><br>
+     <blockquote>When a job succeeds and successfully calls finishApplicationMaster, but the RM is shut down and restarted and the dispatcher is stopped before it can process the REMOVE_APP event, then the next time the RM comes back it will reload the existing state files even though the job has already succeeded.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-502">YARN-502</a>.
+     Major sub-task reported by Lohit Vijayarenu and fixed by Mayank Bansal <br>
+     <b>RM crash with NPE on NODE_REMOVED event with FairScheduler</b><br>
+     <blockquote>While running some tests and adding/removing nodes, we saw the RM crash with the exception below. We are testing with the fair scheduler and running hadoop-2.0.3-alpha.
+
+{noformat}
+2013-03-22 18:54:27,015 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating Node YYYY:55680 as it is now LOST
+2013-03-22 18:54:27,015 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: YYYY:55680 Node Transitioned from UNHEALTHY to LOST
+2013-03-22 18:54:27,015 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling event type NODE_REMOVED to the scheduler
+java.lang.NullPointerException
+        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeNode(FairScheduler.java:619)
+        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:856)
+        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:98)
+        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:375)
+        at java.lang.Thread.run(Thread.java:662)
+2013-03-22 18:54:27,016 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
+2013-03-22 18:54:27,020 INFO org.mortbay.log: Stopped SelectChannelConnector@XXXX:50030
+{noformat}</blockquote></li>
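+     <blockquote>The sketch below only illustrates the kind of guard a NODE_REMOVED handler needs when a node is no longer (or was never) tracked; it is not the actual FairScheduler patch, and the class, map and host names are made up.
+{code}
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+// Illustrative only -- not the FairScheduler fix. A removal handler that
+// tolerates an untracked node instead of dereferencing a null lookup result.
+public class NodeTrackerSketch {
+    private final Map nodes = new ConcurrentHashMap();   // nodeId -> node info
+
+    public void addNode(String nodeId, Object nodeInfo) {
+        nodes.put(nodeId, nodeInfo);
+    }
+
+    public void removeNode(String nodeId) {
+        Object nodeInfo = nodes.remove(nodeId);
+        if (nodeInfo == null) {
+            // Node unknown or already removed (e.g. UNHEALTHY then LOST);
+            // log and return instead of throwing a NullPointerException.
+            System.err.println("Ignoring removal of untracked node " + nodeId);
+            return;
+        }
+        // ... release the node's resources here ...
+    }
+
+    public static void main(String[] args) {
+        new NodeTrackerSketch().removeNode("host1:55680");   // does not throw
+    }
+}
+{code}
+</blockquote>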
+<li> <a href="https://issues.apache.org/jira/browse/YARN-337">YARN-337</a>.
+     Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
+     <b>RM handles killed application tracking URL poorly</b><br>
+     <blockquote>When the ResourceManager kills an application, it leaves the proxy URL redirecting to the original tracking URL for the application even though the ApplicationMaster is no longer there to service it.  It should redirect it somewhere more useful, like the RM's web page for the application, where the user can find that the application was killed and links to the AM logs.
+
+In addition, during teardown from the kill the AM can sometimes attempt to unregister and provide an updated tracking URL, but unfortunately the RM has "forgotten" the AM due to the kill and refuses to process the unregistration.  Instead it logs:
+
+{noformat}
+2013-01-09 17:37:49,671 [IPC Server handler 2 on 8030] ERROR
+org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AppAttemptId doesnt exist in cache appattempt_1357575694478_28614_000001
+{noformat}
+
+It should go ahead and process the unregistration to update the tracking URL since the application offered it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-292">YARN-292</a>.
+     Major sub-task reported by Devaraj K and fixed by Zhijie Shen (resourcemanager)<br>
+     <b>ResourceManager throws ArrayIndexOutOfBoundsException while handling CONTAINER_ALLOCATED for application attempt</b><br>
+     <blockquote>{code:xml}
+2012-12-26 08:41:15,030 ERROR org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler: Calling allocate on removed or non existant application appattempt_1356385141279_49525_000001
+2012-12-26 08:41:15,031 ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling event type CONTAINER_ALLOCATED for applicationAttempt application_1356385141279_49525
+java.lang.ArrayIndexOutOfBoundsException: 0
+	at java.util.Arrays$ArrayList.get(Arrays.java:3381)
+	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:655)
+	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:644)
+	at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
+	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
+	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
+	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
+	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:490)
+	at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
+	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:433)
+	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
+	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
+	at java.lang.Thread.run(Thread.java:662)
+ {code}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-107">YARN-107</a>.
+     Major bug reported by Devaraj K and fixed by Xuan Gong (resourcemanager)<br>
+     <b>ClientRMService.forceKillApplication() should handle the non-RUNNING applications properly</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5497">MAPREDUCE-5497</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>'5s sleep'  in MRAppMaster.shutDownJob is only needed before stopping ClientService</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5493">MAPREDUCE-5493</a>.
+     Blocker bug reported by Jason Lowe and fixed by Jason Lowe (mrv2)<br>
+     <b>In-memory map outputs can be leaked after shuffle completes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5483">MAPREDUCE-5483</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Robert Kanter (distcp)<br>
+     <b>revert MAPREDUCE-5357</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5478">MAPREDUCE-5478</a>.
+     Minor improvement reported by Sandy Ryza and fixed by Sandy Ryza (examples)<br>
+     <b>TeraInputFormat unnecessarily defines its own FileSplit subclass</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5476">MAPREDUCE-5476</a>.
+     Blocker bug reported by Jian He and fixed by Jian He <br>
+     <b>Job can fail when RM restarts after staging dir is cleaned but before MR successfully unregisters with RM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5475">MAPREDUCE-5475</a>.
+     Blocker bug reported by Jason Lowe and fixed by Jason Lowe (mr-am , mrv2)<br>
+     <b>MRClientService does not verify ACLs properly</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5470">MAPREDUCE-5470</a>.
+     Major bug reported by Chris Nauroth and fixed by Sandy Ryza <br>
+     <b>LocalJobRunner does not work on Windows.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5468">MAPREDUCE-5468</a>.
+     Blocker bug reported by Yesha Vora and fixed by Vinod Kumar Vavilapalli <br>
+     <b>AM recovery does not work for map only jobs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5466">MAPREDUCE-5466</a>.
+     Blocker bug reported by Yesha Vora and fixed by Jian He <br>
+     <b>Historyserver does not refresh the result of restarted jobs after RM restart</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5462">MAPREDUCE-5462</a>.
+     Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (performance , task)<br>
+     <b>In map-side sort, swap entire meta entries instead of indexes for better cache performance </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5454">MAPREDUCE-5454</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (test)<br>
+     <b>TestDFSIO fails intermittently on JDK7</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5446">MAPREDUCE-5446</a>.
+     Major bug reported by Jason Lowe and fixed by Jason Lowe (mrv2 , test)<br>
+     <b>TestJobHistoryEvents and TestJobHistoryParsing have race conditions</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5441">MAPREDUCE-5441</a>.
+     Major bug reported by Rohith Sharma K S and fixed by Jian He (applicationmaster , client)<br>
+     <b>JobClient exits whenever RM issues a Reboot command to the 1st attempt App Master.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5440">MAPREDUCE-5440</a>.
+     Major bug reported by Robert Parker and fixed by Robert Parker (mrv2)<br>
+     <b>TestCopyCommitter Fails on JDK7</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5428">MAPREDUCE-5428</a>.
+     Major bug reported by Jason Lowe and fixed by Karthik Kambatla (jobhistoryserver , mrv2)<br>
+     <b>HistoryFileManager doesn't stop threads when service is stopped</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5425">MAPREDUCE-5425</a>.
+     Major bug reported by Ashwin Shankar and fixed by Robert Parker (jobhistoryserver)<br>
+     <b>Junit in TestJobHistoryServer failing in jdk 7</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5414">MAPREDUCE-5414</a>.
+     Major bug reported by Nemon Lou and fixed by Nemon Lou (test)<br>
+     <b>TestTaskAttempt fails jdk7 with NullPointerException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5385">MAPREDUCE-5385</a>.
+     Blocker bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>JobContext cache files api are broken</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5379">MAPREDUCE-5379</a>.
+     Major improvement reported by Sandy Ryza and fixed by Karthik Kambatla (job submission , security)<br>
+     <b>Include token tracking ids in jobconf</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5367">MAPREDUCE-5367</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza <br>
+     <b>Local jobs all use same local working directory</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5358">MAPREDUCE-5358</a>.
+     Major bug reported by Devaraj K and fixed by Devaraj K (mr-am)<br>
+     <b>MRAppMaster throws invalid transitions for JobImpl</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5317">MAPREDUCE-5317</a>.
+     Major bug reported by Ravi Prakash and fixed by Ravi Prakash (mrv2)<br>
+     <b>Stale files left behind for failed jobs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5251">MAPREDUCE-5251</a>.
+     Major bug reported by Jason Lowe and fixed by Ashwin Shankar (mrv2)<br>
+     <b>Reducer should not implicate map attempt if it has insufficient space to fetch map output</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5164">MAPREDUCE-5164</a>.
+     Major bug reported by Nemon Lou and fixed by Nemon Lou <br>
+     <b>command  "mapred job" and "mapred queue" omit HADOOP_CLIENT_OPTS </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5020">MAPREDUCE-5020</a>.
+     Major bug reported by Trevor Robinson and fixed by Trevor Robinson (client)<br>
+     <b>Compile failure with JDK8</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5001">MAPREDUCE-5001</a>.
+     Major bug reported by Brock Noland and fixed by Sandy Ryza <br>
+     <b>LocalJobRunner has race condition resulting in job failures </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3193">MAPREDUCE-3193</a>.
+     Major bug reported by Ramgopal N and fixed by Devaraj K (mrv1 , mrv2)<br>
+     <b>FileInputFormat doesn't read files recursively in the input path dir</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1981">MAPREDUCE-1981</a>.
+     Major improvement reported by Hairong Kuang and fixed by Hairong Kuang (job submission)<br>
+     <b>Improve getSplits performance by using listLocatedStatus</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5199">HDFS-5199</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Add more debug trace for NFS READ and WRITE</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5192">HDFS-5192</a>.
+     Minor bug reported by Jing Zhao and fixed by Jing Zhao <br>
+     <b>NameNode may fail to start when dfs.client.test.drop.namenode.response.number is set</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5159">HDFS-5159</a>.
+     Major bug reported by Aaron T. Myers and fixed by Aaron T. Myers (namenode)<br>
+     <b>Secondary NameNode fails to checkpoint if error occurs downloading edits on first checkpoint</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5150">HDFS-5150</a>.
+     Blocker bug reported by Kihwal Lee and fixed by Kihwal Lee <br>
+     <b>Allow per NN SPN for internal SPNEGO.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5140">HDFS-5140</a>.
+     Blocker bug reported by Arpit Gupta and fixed by Jing Zhao (ha)<br>
+     <b>Too many safemode monitor threads being created in the standby namenode causing it to fail with out of memory error</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5136">HDFS-5136</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>MNT EXPORT should give the full group list which can mount the exports</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5132">HDFS-5132</a>.
+     Blocker bug reported by Arpit Gupta and fixed by Kihwal Lee (namenode)<br>
+     <b>Deadlock in NameNode between SafeModeMonitor#run and DatanodeManager#handleHeartbeat</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5128">HDFS-5128</a>.
+     Critical improvement reported by Kihwal Lee and fixed by Kihwal Lee <br>
+     <b>Allow multiple net interfaces to be used with HA namenode RPC server</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5124">HDFS-5124</a>.
+     Blocker bug reported by Deepesh Khandelwal and fixed by Daryn Sharp (namenode)<br>
+     <b>DelegationTokenSecretManager#retrievePassword can cause deadlock in NameNode</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5118">HDFS-5118</a>.
+     Major new feature reported by Jing Zhao and fixed by Jing Zhao <br>
+     <b>Provide testing support for DFSClient to drop RPC responses</b><br>
+     <blockquote>Used for testing when NameNode HA is enabled. Users can set the new configuration property "dfs.client.test.drop.namenode.response.number" to specify the number of responses that DFSClient will drop in each RPC call. This feature can help test functionality such as the NameNode retry cache (see the sketch below).</blockquote></li>
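+     <blockquote>A minimal sketch of turning the property on for a test, assuming it is read from the client Configuration like other dfs.client.* settings; the value of 3 and the class name are illustrative, and the MiniDFSCluster setup and retry-cache assertions a real test would add are omitted.
+{code}
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+
+// Sketch: build a test Configuration that makes DFSClient drop NameNode
+// responses so retry paths (e.g. the NameNode retry cache) get exercised.
+public class DropResponsesTestSetup {
+    public static Configuration buildConf() {
+        Configuration conf = new HdfsConfiguration();
+        // Number of responses DFSClient should drop in each RPC call.
+        conf.setInt("dfs.client.test.drop.namenode.response.number", 3);
+        return conf;
+    }
+
+    public static void main(String[] args) {
+        System.out.println(buildConf().get("dfs.client.test.drop.namenode.response.number"));
+    }
+}
+{code}
+</blockquote>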
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5111">HDFS-5111</a>.
+     Minor bug reported by Jing Zhao and fixed by Jing Zhao (snapshots)<br>
+     <b>Remove duplicated error message for snapshot commands when processing invalid arguments</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5110">HDFS-5110</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Change FSDataOutputStream to HdfsDataOutputStream for opened streams to fix type cast error</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5107">HDFS-5107</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Fix array copy error in Readdir and Readdirplus responses</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5106">HDFS-5106</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu (test)<br>
+     <b>TestDatanodeBlockScanner fails on Windows due to incorrect path format</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5105">HDFS-5105</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestFsck fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5104">HDFS-5104</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Support dotdot name in NFS LOOKUP operation</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5103">HDFS-5103</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu (test)<br>
+     <b>TestDirectoryScanner fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5102">HDFS-5102</a>.
+     Major bug reported by Aaron T. Myers and fixed by Jing Zhao (snapshots)<br>
+     <b>Snapshot names should not be allowed to contain slash characters</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5100">HDFS-5100</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu (test)<br>
+     <b>TestNamenodeRetryCache fails on Windows due to incorrect cleanup</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5099">HDFS-5099</a>.
+     Major bug reported by Chuan Liu and fixed by Chuan Liu (namenode)<br>
+     <b>Namenode#copyEditLogSegmentsToSharedDir should close EditLogInputStreams upon finishing</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5091">HDFS-5091</a>.
+     Minor bug reported by Jing Zhao and fixed by Jing Zhao <br>
+     <b>Support for spnego keytab separate from the JournalNode keytab for secure HA</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5085">HDFS-5085</a>.
+     Major sub-task reported by Brandon Li and fixed by Jing Zhao (nfs)<br>
+     <b>Refactor o.a.h.nfs to support different types of authentications</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5080">HDFS-5080</a>.
+     Major bug reported by Jing Zhao and fixed by Jing Zhao (ha , qjm)<br>
+     <b>BootstrapStandby not working with QJM when the existing NN is active</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5078">HDFS-5078</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Support file append in NFSv3 gateway to enable data streaming to HDFS</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5076">HDFS-5076</a>.
+     Minor new feature reported by Jing Zhao and fixed by Jing Zhao <br>
+     <b>Add MXBean methods to query NN's transaction information and JournalNode's journal status</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5071">HDFS-5071</a>.
+     Major sub-task reported by Kihwal Lee and fixed by Brandon Li (nfs)<br>
+     <b>Change hdfs-nfs parent project to hadoop-project</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5069">HDFS-5069</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Include hadoop-nfs and hadoop-hdfs-nfs into hadoop dist for NFS deployment</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5067">HDFS-5067</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Support symlink operations</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5061">HDFS-5061</a>.
+     Major improvement reported by Arpit Agarwal and fixed by Arpit Agarwal (namenode)<br>
+     <b>Make FSNameSystem#auditLoggers an unmodifiable list</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5055">HDFS-5055</a>.
+     Blocker bug reported by Allen Wittenauer and fixed by Vinay (namenode)<br>
+     <b>nn fails to download checkpointed image from snn in some setups</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5047">HDFS-5047</a>.
+     Major bug reported by Kihwal Lee and fixed by Robert Parker (namenode)<br>
+     <b>Suppress logging of full stack trace of quota and lease exceptions</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5045">HDFS-5045</a>.
+     Minor improvement reported by Jing Zhao and fixed by Jing Zhao <br>
+     <b>Add more unit tests for retry cache to cover all AtMostOnce methods</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5043">HDFS-5043</a>.
+     Major bug reported by Brandon Li and fixed by Brandon Li <br>
+     <b>For HdfsFileStatus, set default value of childrenNum to -1 instead of 0 to avoid confusing applications</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-5028">HDFS-5028</a>.
+     Major bug reported by zhaoyunjiong and fixed by zhaoyunjiong <br>
+     <b>LeaseRenewer throws java.util.ConcurrentModificationException on timeout</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4993">HDFS-4993</a>.
+     Major bug reported by Kihwal Lee and fixed by Robert Parker <br>
+     <b>fsck can fail if a file is renamed or deleted</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4962">HDFS-4962</a>.
+     Minor sub-task reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (nfs)<br>
+     <b>Use enum for nfs constants</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4947">HDFS-4947</a>.
+     Major sub-task reported by Brandon Li and fixed by Jing Zhao (nfs)<br>
+     <b>Add NFS server export table to control export by hostname or IP range</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4926">HDFS-4926</a>.
+     Trivial improvement reported by Joseph Lorenzini and fixed by Vivek Ganesan (namenode)<br>
+     <b>namenode webserver's page has a tooltip that is inconsistent with the datanode HTML link</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4905">HDFS-4905</a>.
+     Minor improvement reported by Arpit Agarwal and fixed by Arpit Agarwal (tools)<br>
+     <b>Add appendToFile command to "hdfs dfs"</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4898">HDFS-4898</a>.
+     Minor bug reported by Eric Sirianni and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>BlockPlacementPolicyWithNodeGroup.chooseRemoteRack() fails to properly fallback to local rack</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4763">HDFS-4763</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (nfs)<br>
+     <b>Add script changes/utility for starting NFS gateway</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4680">HDFS-4680</a>.
+     Major bug reported by Andrew Wang and fixed by Andrew Wang (namenode , security)<br>
+     <b>Audit logging of delegation tokens for MR tracing</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4632">HDFS-4632</a>.
+     Major bug reported by Chris Nauroth and fixed by Chuan Liu (test)<br>
+     <b>globStatus using backslash for escaping does not work on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4594">HDFS-4594</a>.
+     Minor bug reported by Arpit Gupta and fixed by Chris Nauroth (webhdfs)<br>
+     <b>WebHDFS open sets Content-Length header to what is specified by length parameter rather than how much data is actually returned. </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4329">HDFS-4329</a>.
+     Major bug reported by Andy Isaacson and fixed by Cristina L. Abad (hdfs-client)<br>
+     <b>DFSShell issues with directories with spaces in name</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3245">HDFS-3245</a>.
+     Major improvement reported by Todd Lipcon and fixed by Ravi Prakash (namenode)<br>
+     <b>Add metrics and web UI for cluster version summary</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2933">HDFS-2933</a>.
+     Major improvement reported by Philip Zeyliger and fixed by Vivek Ganesan (datanode)<br>
+     <b>Improve DataNode Web UI Index Page</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9962">HADOOP-9962</a>.
+     Major improvement reported by Roman Shaposhnik and fixed by Roman Shaposhnik (build)<br>
+     <b>in order to avoid dependency divergence within Hadoop itself let's enable DependencyConvergence</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9961">HADOOP-9961</a>.
+     Minor bug reported by Roman Shaposhnik and fixed by Roman Shaposhnik (build)<br>
+     <b>versions of a few transitive dependencies diverged between hadoop subprojects</b><br>
+     <blockquote></blockquote></li>

[... 155 lines stripped ...]