Posted to common-commits@hadoop.apache.org by ll...@apache.org on 2013/06/29 22:21:00 UTC

svn commit: r1498023 [2/2] - /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html

Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html?rev=1498023&r1=1498022&r2=1498023&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html Sat Jun 29 20:20:59 2013
@@ -23,45 +23,45 @@ These release notes include new develope
 <li> <a href="https://issues.apache.org/jira/browse/YARN-861">YARN-861</a>.
      Critical bug reported by Devaraj K and fixed by Vinod Kumar Vavilapalli (nodemanager)<br>
      <b>TestContainerManager is failing</b><br>
-     <blockquote>https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
-
-{code:xml}
-Running org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
-Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec &lt;&lt;&lt; FAILURE!
-testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)  Time elapsed: 286 sec  &lt;&lt;&lt; FAILURE!
-junit.framework.ComparisonFailure: expected:&lt;[asf009.sp2.ygridcore.ne]t&gt; but was:&lt;[localhos]t&gt;
-	at junit.framework.Assert.assertEquals(Assert.java:85)
-
+     <blockquote>https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
+
+{code:xml}
+Running org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
+Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec &lt;&lt;&lt; FAILURE!
+testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)  Time elapsed: 286 sec  &lt;&lt;&lt; FAILURE!
+junit.framework.ComparisonFailure: expected:&lt;[asf009.sp2.ygridcore.ne]t&gt; but was:&lt;[localhos]t&gt;
+	at junit.framework.Assert.assertEquals(Assert.java:85)
+
 {code}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-854">YARN-854</a>.
      Blocker bug reported by Ramya Sunil and fixed by Omkar Vinit Joshi <br>
      <b>App submission fails on secure deploy</b><br>
-     <blockquote>App submission on secure cluster fails with the following exception:
-
-{noformat}
-INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application applicationID failed 2 times due to AM Container for appattemptID exited with  exitCode: -1000 due to: App initialization failed (255) with output: main : command provided 0
-main : user is qa_user
-javax.security.sasl.SaslException: DIGEST-MD5: digest response format violation. Mismatched response. [Caused by org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): DIGEST-MD5: digest response format violation. Mismatched response.]
-	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
-	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
-	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
-	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
-	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
-	at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
-	at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
-	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
-	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
-	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
-Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): DIGEST-MD5: digest response format violation. Mismatched response.
-	at org.apache.hadoop.ipc.Client.call(Client.java:1298)
-	at org.apache.hadoop.ipc.Client.call(Client.java:1250)
-	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
-	at $Proxy7.heartbeat(Unknown Source)
-	at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
-	... 3 more
-
-.Failing this attempt.. Failing the application.
-
+     <blockquote>App submission on secure cluster fails with the following exception:
+
+{noformat}
+INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application applicationID failed 2 times due to AM Container for appattemptID exited with  exitCode: -1000 due to: App initialization failed (255) with output: main : command provided 0
+main : user is qa_user
+javax.security.sasl.SaslException: DIGEST-MD5: digest response format violation. Mismatched response. [Caused by org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): DIGEST-MD5: digest response format violation. Mismatched response.]
+	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
+	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
+	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
+	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
+	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
+	at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
+	at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
+Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): DIGEST-MD5: digest response format violation. Mismatched response.
+	at org.apache.hadoop.ipc.Client.call(Client.java:1298)
+	at org.apache.hadoop.ipc.Client.call(Client.java:1250)
+	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
+	at $Proxy7.heartbeat(Unknown Source)
+	at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
+	... 3 more
+
+.Failing this attempt.. Failing the application.
+
 {noformat}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-852">YARN-852</a>.
      Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
@@ -78,8 +78,8 @@ Caused by: org.apache.hadoop.ipc.RemoteE
 <li> <a href="https://issues.apache.org/jira/browse/YARN-848">YARN-848</a>.
      Major bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
      <b>Nodemanager does not register with RM using the fully qualified hostname</b><br>
-     <blockquote>If the hostname is misconfigured to not be fully qualified ( i.e. hostname returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering with the RM using only "foo". This can create problems if DNS cannot resolve the hostname properly. 
-
+     <blockquote>If the hostname is misconfigured to not be fully qualified ( i.e. hostname returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering with the RM using only "foo". This can create problems if DNS cannot resolve the hostname properly. 
+
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting locality matches when allocating containers based on block locations. </blockquote></li>
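
As a point of reference only (this is not the NM code), the JDK already distinguishes the short and canonical forms; a registration path that prefers the canonical name avoids the mismatch described above:
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal sketch: prefer the DNS-resolved fully qualified name over the bare
// host name when identifying a node.
public class FqdnExample {
  public static void main(String[] args) throws UnknownHostException {
    InetAddress addr = InetAddress.getLocalHost();
    String shortName = addr.getHostName();       // may be just "foo"
    String fqdn = addr.getCanonicalHostName();   // e.g. "foo.bar.xyz" if DNS is set up
    System.out.println("short=" + shortName + " fqdn=" + fqdn);
  }
}
{code}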
 <li> <a href="https://issues.apache.org/jira/browse/YARN-846">YARN-846</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
@@ -96,24 +96,24 @@ Furthermore, HDFS uses fully qualified h
 <li> <a href="https://issues.apache.org/jira/browse/YARN-839">YARN-839</a>.
      Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
      <b>TestContainerLaunch.testContainerEnvVariables fails on Windows</b><br>
-     <blockquote>The unit test case fails on Windows because the job id or container id is not printed out as part of the container script. Later, the test tries to read the pid from the output file, and fails.
-
-Exception in trunk:
-{noformat}
-Running org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
-Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.903 sec &lt;&lt;&lt; FAILURE!
-testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 1307 sec  &lt;&lt;&lt; ERROR!
-java.lang.NullPointerException
-        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:278)
-        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
-        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
-        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
-        at java.lang.reflect.Method.invoke(Method.java:597)
-        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
-        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
-        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
-        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
-        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
+     <blockquote>The unit test case fails on Windows because the job id or container id is not printed out as part of the container script. Later, the test tries to read the pid from the output file, and fails.
+
+Exception in trunk:
+{noformat}
+Running org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
+Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.903 sec &lt;&lt;&lt; FAILURE!
+testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 1307 sec  &lt;&lt;&lt; ERROR!
+java.lang.NullPointerException
+        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:278)
+        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
+        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
+        at java.lang.reflect.Method.invoke(Method.java:597)
+        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
+        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
+        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
+        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
+        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
 {noformat}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-837">YARN-837</a>.
      Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
@@ -154,7 +154,7 @@ java.lang.NullPointerException
 <li> <a href="https://issues.apache.org/jira/browse/YARN-824">YARN-824</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
      <b>Add  static factory to yarn client lib interface and change it to abstract class</b><br>
-     <blockquote>Do this for AMRMClient, NMClient, and YarnClient, and annotate their implementations as private.
+     <blockquote>Do this for AMRMClient, NMClient, and YarnClient, and annotate their implementations as private.
 The purpose is not to expose the implementations.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-823">YARN-823</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
@@ -171,73 +171,73 @@ The purpose is not to expose impl</block
 <li> <a href="https://issues.apache.org/jira/browse/YARN-812">YARN-812</a>.
      Major bug reported by Ramya Sunil and fixed by Siddharth Seth <br>
      <b>Enabling app summary logs causes 'FileNotFound' errors</b><br>
-     <blockquote>RM app summary logs have been enabled as per the default config:
-
-{noformat}
-#
-# Yarn ResourceManager Application Summary Log 
-#
-# Set the ResourceManager summary log filename
-yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
-# Set the ResourceManager summary log level and appender
-yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
-
-# Appender for ResourceManager Application Summary Log
-# Requires the following properties to be set
-#    - hadoop.log.dir (Hadoop Log directory)
-#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
-#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
-
-log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
-log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
-log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
-log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
-log4j.appender.RMSUMMARY.MaxFileSize=256MB
-log4j.appender.RMSUMMARY.MaxBackupIndex=20
-log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
-log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-{noformat}
-
-This however, throws errors while running commands as non-superuser:
-{noformat}
--bash-4.1$ hadoop dfs -ls /
-DEPRECATED: Use of this script to execute hdfs command is deprecated.
-Instead use the hdfs command for it.
-
-log4j:ERROR setFile(null,true) call failed.
-java.io.FileNotFoundException: /var/log/hadoop/hadoopqa/rm-appsummary.log (No such file or directory)
-        at java.io.FileOutputStream.openAppend(Native Method)
-        at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:192)
-        at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:116)
-        at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
-        at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
-        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
-        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
-        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
-        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
-        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
-        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
-        at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
-        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
-        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
-        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
-        at org.apache.log4j.LogManager.&lt;clinit&gt;(LogManager.java:127)
-        at org.apache.log4j.Logger.getLogger(Logger.java:104)
-        at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:289)
-        at org.apache.commons.logging.impl.Log4JLogger.&lt;init&gt;(Log4JLogger.java:109)
-        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
-        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
-        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
-        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
-        at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1116)
-        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:858)
-        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
-        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
-        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310)
-        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
-        at org.apache.hadoop.fs.FsShell.&lt;clinit&gt;(FsShell.java:41)
-Found 1 items
-drwxr-xr-x   - hadoop   hadoop            0 2013-06-12 21:28 /user
+     <blockquote>RM app summary logs have been enabled as per the default config:
+
+{noformat}
+#
+# Yarn ResourceManager Application Summary Log 
+#
+# Set the ResourceManager summary log filename
+yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
+# Set the ResourceManager summary log level and appender
+yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
+
+# Appender for ResourceManager Application Summary Log
+# Requires the following properties to be set
+#    - hadoop.log.dir (Hadoop Log directory)
+#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
+#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
+
+log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
+log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
+log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
+log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
+log4j.appender.RMSUMMARY.MaxFileSize=256MB
+log4j.appender.RMSUMMARY.MaxBackupIndex=20
+log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
+log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+{noformat}
+
+This however, throws errors while running commands as non-superuser:
+{noformat}
+-bash-4.1$ hadoop dfs -ls /
+DEPRECATED: Use of this script to execute hdfs command is deprecated.
+Instead use the hdfs command for it.
+
+log4j:ERROR setFile(null,true) call failed.
+java.io.FileNotFoundException: /var/log/hadoop/hadoopqa/rm-appsummary.log (No such file or directory)
+        at java.io.FileOutputStream.openAppend(Native Method)
+        at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:192)
+        at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:116)
+        at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
+        at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
+        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
+        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
+        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
+        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
+        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
+        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
+        at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
+        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
+        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
+        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
+        at org.apache.log4j.LogManager.&lt;clinit&gt;(LogManager.java:127)
+        at org.apache.log4j.Logger.getLogger(Logger.java:104)
+        at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:289)
+        at org.apache.commons.logging.impl.Log4JLogger.&lt;init&gt;(Log4JLogger.java:109)
+        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
+        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
+        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
+        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
+        at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1116)
+        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:858)
+        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
+        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
+        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310)
+        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
+        at org.apache.hadoop.fs.FsShell.&lt;clinit&gt;(FsShell.java:41)
+Found 1 items
+drwxr-xr-x   - hadoop   hadoop            0 2013-06-12 21:28 /user
 {noformat}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-806">YARN-806</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
@@ -254,75 +254,75 @@ drwxr-xr-x   - hadoop   hadoop          
 <li> <a href="https://issues.apache.org/jira/browse/YARN-799">YARN-799</a>.
      Major bug reported by Chris Riccomini and fixed by Chris Riccomini (nodemanager)<br>
      <b>CgroupsLCEResourcesHandler tries to write to cgroup.procs</b><br>
-     <blockquote>The implementation of
-
-bq. ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
-
-Tells the container-executor to write PIDs to cgroup.procs:
-
-{code}
-  public String getResourcesOption(ContainerId containerId) {
-    String containerName = containerId.toString();
-    StringBuilder sb = new StringBuilder("cgroups=");
-
-    if (isCpuWeightEnabled()) {
-      sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");
-      sb.append(",");
-    }
-
-    if (sb.charAt(sb.length() - 1) == ',') {
-      sb.deleteCharAt(sb.length() - 1);
-    } 
-    return sb.toString();
-  }
-{code}
-
-Apparently, this file has not always been writeable:
-
-https://patchwork.kernel.org/patch/116146/
-http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
-https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
-
-The RHEL version of the Linux kernel that I'm using has a CGroup module that has a non-writeable cgroup.procs file.
-
-{quote}
-$ uname -a
-Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
-{quote}
-
-As a result, when the container-executor tries to run, it fails with this error message:
-
-bq.    fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
-
-This is because the executor is given a resource by the CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
-
-{quote}
-$ pwd 
-/cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_000001
-$ ls -l
-total 0
--r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
--rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
--rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
--rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
--rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
--rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
-{quote}
-
-I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, and this appears to have fixed the problem.
-
-I can think of several potential resolutions to this ticket:
-
-1. Ignore the problem, and make people patch YARN when they hit this issue.
-2. Write to /tasks instead of /cgroup.procs for everyone
-3. Check permissioning on /cgroup.procs prior to writing to it, and fall back to /tasks.
-4. Add a config to yarn-site that lets admins specify which file to write to.
-
+     <blockquote>The implementation of
+
+bq. ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
+
+Tells the container-executor to write PIDs to cgroup.procs:
+
+{code}
+  public String getResourcesOption(ContainerId containerId) {
+    String containerName = containerId.toString();
+    StringBuilder sb = new StringBuilder("cgroups=");
+
+    if (isCpuWeightEnabled()) {
+      sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");
+      sb.append(",");
+    }
+
+    if (sb.charAt(sb.length() - 1) == ',') {
+      sb.deleteCharAt(sb.length() - 1);
+    } 
+    return sb.toString();
+  }
+{code}
+
+Apparently, this file has not always been writeable:
+
+https://patchwork.kernel.org/patch/116146/
+http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html
+https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html
+
+The RHEL version of the Linux kernel that I'm using has a CGroup module that has a non-writeable cgroup.procs file.
+
+{quote}
+$ uname -a
+Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
+{quote}
+
+As a result, when the container-executor tries to run, it fails with this error message:
+
+bq.    fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",
+
+This is because the executor is given a resource by the CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:
+
+{quote}
+$ pwd 
+/cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_000001
+$ ls -l
+total 0
+-r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs
+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us
+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us
+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares
+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release
+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks
+{quote}
+
+I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, and this appears to have fixed the problem.
+
+I can think of several potential resolutions to this ticket:
+
+1. Ignore the problem, and make people patch YARN when they hit this issue.
+2. Write to /tasks instead of /cgroup.procs for everyone
+3. Check permissioning on /cgroup.procs prior to writing to it, and fall back to /tasks.
+4. Add a config to yarn-site that lets admins specify which file to write to.
+
 Thoughts?</blockquote></li>
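
As a sketch of option 3 from the list in this report (not the change that was actually committed), the handler could probe whether cgroup.procs is writable and fall back to the tasks file; the method name below is invented for illustration:
{code}
import java.io.File;

// Illustrative fallback: use cgroup.procs when the kernel makes it writable,
// otherwise write PIDs to the older "tasks" file.
// Note: canWrite() reflects this process's permissions, which approximates but
// does not exactly match what the container-executor can do.
public class CgroupTaskFile {
  static String taskFileFor(String cgroupPath) {
    File procs = new File(cgroupPath, "cgroup.procs");
    if (procs.exists() && procs.canWrite()) {
      return procs.getAbsolutePath();
    }
    return new File(cgroupPath, "tasks").getAbsolutePath();
  }

  public static void main(String[] args) {
    System.out.println(taskFileFor("/cgroup/cpu/hadoop-yarn/container_example"));
  }
}
{code}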
 <li> <a href="https://issues.apache.org/jira/browse/YARN-795">YARN-795</a>.
      Major bug reported by Wei Yan and fixed by Wei Yan (scheduler)<br>
      <b>Fair scheduler queue metrics should subtract allocated vCores from available vCores</b><br>
-     <blockquote>The fair scheduler's queue metrics don't subtract allocated vCores from available vCores, so the available vCores value returned is incorrect.
+     <blockquote>The fair scheduler's queue metrics don't subtract allocated vCores from available vCores, so the available vCores value returned is incorrect.
 This is happening because {code}QueueMetrics.getAllocateResources(){code} doesn't return the allocated vCores.</blockquote></li>
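
The fix amounts to accounting for allocated virtual cores when reporting availability, the same way memory is handled; a schematic sketch (field and method names are illustrative, not the actual QueueMetrics API):
{code}
// Schematic: available vCores must account for what has been allocated,
// mirroring how available memory is computed.
public class VcoreAvailability {
  private final int totalVcores;
  private int allocatedVcores;

  VcoreAvailability(int totalVcores) {
    this.totalVcores = totalVcores;
  }

  void allocate(int vcores) {
    allocatedVcores += vcores;
  }

  int availableVcores() {
    return totalVcores - allocatedVcores;   // the missing subtraction
  }

  public static void main(String[] args) {
    VcoreAvailability m = new VcoreAvailability(16);
    m.allocate(6);
    System.out.println(m.availableVcores()); // 10, not 16
  }
}
{code}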
 <li> <a href="https://issues.apache.org/jira/browse/YARN-792">YARN-792</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
@@ -331,43 +331,43 @@ This is happening because {code}QueueMet
 <li> <a href="https://issues.apache.org/jira/browse/YARN-789">YARN-789</a>.
      Major improvement reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (scheduler)<br>
      <b>Enable zero capabilities resource requests in fair scheduler</b><br>
-     <blockquote>Per discussion in YARN-689, reposting updated use case:
-
-1. I have a set of services co-existing with a Yarn cluster.
-
-2. These services run out of band from Yarn. They are not started as yarn containers and they don't use Yarn containers for processing.
-
-3. These services use, dynamically, different amounts of CPU and memory based on their load. They manage their CPU and memory requirements independently. In other words, depending on their load, they may require more CPU but not memory or vice-versa.
-By using YARN as the RM for these services I'm able to share and utilize the resources of the cluster appropriately and in a dynamic way. Yarn keeps tabs on all the resources.
-
-These services run an AM that reserves resources on their behalf. When this AM gets the requested resources, the services bump up their CPU/memory utilization out of band from Yarn. If the Yarn allocations are released/preempted, the services back off on their resource utilization. By doing this, Yarn and these services correctly share the cluster resources, with the Yarn RM being the only one that does the overall resource bookkeeping.
-
-The services AM, not to break the lifecycle of containers, start containers in the corresponding NMs. These container processes do basically a sleep forever (i.e. sleep 10000d). They are almost not using any CPU nor memory (less than 1MB). Thus it is reasonable to assume their required CPU and memory utilization is NIL (more on hard enforcement later). Because of this almost NIL utilization of CPU and memory, it is possible to specify, when doing a request, zero as one of the dimensions (CPU or memory).
-
-The current limitation is that the increment is also the minimum. 
-
-If we set the memory increment to 1MB, then when doing a pure CPU request we would have to specify 1MB of memory. That would work. However, it would allow discretionary memory requests without the desired normalization (increments of 256, 512, etc).
-
-If we set the CPU increment to 1 CPU, then when doing a pure memory request we would have to specify 1 CPU. CPU amounts are much smaller than memory amounts, and because we don't have fractional CPUs, it would mean that all my pure memory requests waste 1 CPU, thus reducing the overall utilization of the cluster.
-
-Finally, on hard enforcement. 
-
-* For CPU. Hard enforcement can be done via a cgroup cpu controller. Using an absolute minimum of a few CPU shares (ie 10) in the LinuxContainerExecutor we ensure there is enough CPU cycles to run the sleep process. This absolute minimum would only kick-in if zero is allowed, otherwise will never kick in as the shares for 1 CPU are 1024.
-
+     <blockquote>Per discussion in YARN-689, reposting updated use case:
+
+1. I have a set of services co-existing with a Yarn cluster.
+
+2. These services run out of band from Yarn. They are not started as yarn containers and they don't use Yarn containers for processing.
+
+3. These services use, dynamically, different amounts of CPU and memory based on their load. They manage their CPU and memory requirements independently. In other words, depending on their load, they may require more CPU but not memory or vice-versa.
+By using YARN as the RM for these services I'm able to share and utilize the resources of the cluster appropriately and in a dynamic way. Yarn keeps tabs on all the resources.
+
+These services run an AM that reserves resources on their behalf. When this AM gets the requested resources, the services bump up their CPU/memory utilization out of band from Yarn. If the Yarn allocations are released/preempted, the services back off on their resource utilization. By doing this, Yarn and these services correctly share the cluster resources, with the Yarn RM being the only one that does the overall resource bookkeeping.
+
+The services AM, not to break the lifecycle of containers, start containers in the corresponding NMs. These container processes do basically a sleep forever (i.e. sleep 10000d). They are almost not using any CPU nor memory (less than 1MB). Thus it is reasonable to assume their required CPU and memory utilization is NIL (more on hard enforcement later). Because of this almost NIL utilization of CPU and memory, it is possible to specify, when doing a request, zero as one of the dimensions (CPU or memory).
+
+The current limitation is that the increment is also the minimum. 
+
+If we set the memory increment to 1MB, then when doing a pure CPU request we would have to specify 1MB of memory. That would work. However, it would allow discretionary memory requests without the desired normalization (increments of 256, 512, etc).
+
+If we set the CPU increment to 1 CPU, then when doing a pure memory request we would have to specify 1 CPU. CPU amounts are much smaller than memory amounts, and because we don't have fractional CPUs, it would mean that all my pure memory requests waste 1 CPU, thus reducing the overall utilization of the cluster.
+
+Finally, on hard enforcement. 
+
+* For CPU. Hard enforcement can be done via a cgroup cpu controller. Using an absolute minimum of a few CPU shares (ie 10) in the LinuxContainerExecutor we ensure there is enough CPU cycles to run the sleep process. This absolute minimum would only kick-in if zero is allowed, otherwise will never kick in as the shares for 1 CPU are 1024.
+
 * For Memory. Hard enforcement is currently done by the ProcfsBasedProcessTree.java, using a minimum absolute of 1 or 2 MBs would take care of zero memory resources. And again,  this absolute minimum would only kick-in if zero is allowed, otherwise will never kick in as the increment memory is in several MBs if not 1GB.</blockquote></li>
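
On the CPU-enforcement point above: one vcore conventionally maps to 1024 cgroup shares, so an absolute floor of a few shares only ever matters when zero-vcore requests are allowed. A hedged sketch of that arithmetic (the constants are assumptions for illustration, not the LinuxContainerExecutor's actual values):
{code}
// Illustrative share computation: a zero-vcore container still gets a tiny
// non-zero share so its (mostly idle) process can run at all.
public class CpuShares {
  static final int SHARES_PER_CPU = 1024; // common cgroup convention
  static final int MIN_SHARES = 10;       // assumed absolute floor

  static int sharesFor(int vcores) {
    return Math.max(MIN_SHARES, vcores * SHARES_PER_CPU);
  }

  public static void main(String[] args) {
    System.out.println(sharesFor(0)); // 10
    System.out.println(sharesFor(1)); // 1024
  }
}
{code}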
 <li> <a href="https://issues.apache.org/jira/browse/YARN-787">YARN-787</a>.
      Blocker sub-task reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (api)<br>
      <b>Remove resource min from Yarn client API</b><br>
-     <blockquote>Per discussions in YARN-689 and YARN-769 we should remove minimum from the API as this is a scheduler internal thing.
+     <blockquote>Per discussions in YARN-689 and YARN-769 we should remove minimum from the API as this is a scheduler internal thing.
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-782">YARN-782</a>.
      Critical improvement reported by Sandy Ryza and fixed by Sandy Ryza (nodemanager)<br>
      <b>vcores-pcores ratio functions differently from vmem-pmem ratio in misleading way </b><br>
-     <blockquote>The vcores-pcores ratio functions differently from the vmem-pmem ratio in the sense that the vcores-pcores ratio has an impact on allocations and the vmem-pmem ratio does not.
-
-If I double my vmem-pmem ratio, the only change that occurs is that my containers, after being scheduled, are less likely to be killed for using too much virtual memory.  But if I double my vcore-pcore ratio, my nodes will appear to the ResourceManager to contain double the amount of CPU space, which will affect scheduling decisions.
-
-The lack of consistency will exacerbate the already difficult problem of resource configuration.
+     <blockquote>The vcores-pcores ratio functions differently from the vmem-pmem ratio in the sense that the vcores-pcores ratio has an impact on allocations and the vmem-pmem ratio does not.
+
+If I double my vmem-pmem ratio, the only change that occurs is that my containers, after being scheduled, are less likely to be killed for using too much virtual memory.  But if I double my vcore-pcore ratio, my nodes will appear to the ResourceManager to contain double the amount of CPU space, which will affect scheduling decisions.
+
+The lack of consistency will exacerbate the already difficult problem of resource configuration.
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-781">YARN-781</a>.
      Major sub-task reported by Devaraj Das and fixed by Jian He <br>
@@ -384,18 +384,18 @@ The lack of consistency will exacerbate 
 <li> <a href="https://issues.apache.org/jira/browse/YARN-767">YARN-767</a>.
      Major bug reported by Jian He and fixed by Jian He <br>
      <b>Initialize Application status metrics  when QueueMetrics is initialized</b><br>
-     <blockquote>Applications: ResourceManager.QueueMetrics.AppsSubmitted, ResourceManager.QueueMetrics.AppsRunning, ResourceManager.QueueMetrics.AppsPending, ResourceManager.QueueMetrics.AppsCompleted, ResourceManager.QueueMetrics.AppsKilled, ResourceManager.QueueMetrics.AppsFailed
+     <blockquote>Applications: ResourceManager.QueueMetrics.AppsSubmitted, ResourceManager.QueueMetrics.AppsRunning, ResourceManager.QueueMetrics.AppsPending, ResourceManager.QueueMetrics.AppsCompleted, ResourceManager.QueueMetrics.AppsKilled, ResourceManager.QueueMetrics.AppsFailed
 For now these metrics are created only when they are first needed; we want them to be visible as soon as QueueMetrics is initialized.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-764">YARN-764</a>.
      Major bug reported by nemon lou and fixed by nemon lou (resourcemanager)<br>
      <b>blank Used Resources on Capacity Scheduler page </b><br>
-     <blockquote>Even when there are jobs running, used resources is empty on the Capacity Scheduler page for a leaf queue. (I use google-chrome on Windows 7.)
+     <blockquote>Even when there are jobs running, used resources is empty on the Capacity Scheduler page for a leaf queue. (I use google-chrome on Windows 7.)
 After changing Resource.java's toString method to replace "&lt;&gt;" with "{}", this bug gets fixed.</blockquote></li>
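
A minimal illustration of the change described above, assuming the resource string is emitted into the page without HTML escaping so angle brackets get swallowed as a tag:
{code}
// Sketch only: "<memory:..., vCores:...>" looks like an HTML tag to the
// browser, while braces render as plain text.
public class ResourceToStringExample {
  static String withAngleBrackets(int memory, int vcores) {
    return "<memory:" + memory + ", vCores:" + vcores + ">";
  }

  static String withBraces(int memory, int vcores) {
    return "{memory:" + memory + ", vCores:" + vcores + "}";
  }

  public static void main(String[] args) {
    System.out.println(withAngleBrackets(2048, 2)); // blank cell when rendered as HTML
    System.out.println(withBraces(2048, 2));        // visible on the scheduler page
  }
}
{code}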
 <li> <a href="https://issues.apache.org/jira/browse/YARN-761">YARN-761</a>.
      Major bug reported by Vinod Kumar Vavilapalli and fixed by Zhijie Shen <br>
      <b>TestNMClientAsync fails sometimes</b><br>
-     <blockquote>See https://builds.apache.org/job/PreCommit-YARN-Build/1101//testReport/.
-
+     <blockquote>See https://builds.apache.org/job/PreCommit-YARN-Build/1101//testReport/.
+
 It passed on my machine though.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-760">YARN-760</a>.
      Major bug reported by Sandy Ryza and fixed by Niranjan Singh (nodemanager)<br>
@@ -428,8 +428,8 @@ It passed on my machine though.</blockqu
 <li> <a href="https://issues.apache.org/jira/browse/YARN-750">YARN-750</a>.
      Major sub-task reported by Arun C Murthy and fixed by Arun C Murthy <br>
      <b>Allow for black-listing resources in YARN API and Impl in CS</b><br>
-     <blockquote>YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of resources.
-
+     <blockquote>YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of resources.
+
 This jira is a companion to allow for black-listing (in CS).</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-749">YARN-749</a>.
      Major sub-task reported by Arun C Murthy and fixed by Arun C Murthy <br>
@@ -442,13 +442,13 @@ This jira is a companion to allow for bl
 <li> <a href="https://issues.apache.org/jira/browse/YARN-746">YARN-746</a>.
      Major sub-task reported by Steve Loughran and fixed by Steve Loughran <br>
      <b>rename Service.register() and Service.unregister() to registerServiceListener() &amp; unregisterServiceListener() respectively</b><br>
-     <blockquote>Make it clear what you are registering on a {{Service}} by naming the methods {{registerServiceListener()}} &amp; {{unregisterServiceListener()}} respectively.
-
+     <blockquote>Make it clear what you are registering on a {{Service}} by naming the methods {{registerServiceListener()}} &amp; {{unregisterServiceListener()}} respectively.
+
 This only affects a couple of production classes; {{Service.register()}} is also used in some of the lifecycle tests of YARN-530. There are no tests of {{Service.unregister()}}, which is something that could be corrected.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-742">YARN-742</a>.
      Major bug reported by Kihwal Lee and fixed by Jason Lowe (nodemanager)<br>
      <b>Log aggregation causes a lot of redundant setPermission calls</b><br>
-     <blockquote>In one of our clusters, namenode RPC is spending 45% of its time on serving setPermission calls. Further investigation has revealed that most calls are redundantly made on /mapred/logs/&lt;user&gt;/logs. Also mkdirs calls are made before this.
+     <blockquote>In one of our clusters, namenode RPC is spending 45% of its time on serving setPermission calls. Further investigation has revealed that most calls are redundantly made on /mapred/logs/&lt;user&gt;/logs. Also mkdirs calls are made before this.
 </blockquote></li>
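
One way to avoid the redundant RPCs described above is to check the existing status before issuing mkdirs or setPermission; this is only a sketch against the public FileSystem API, not necessarily how the fix was implemented:
{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: issue mkdirs/setPermission only when needed, sparing the NameNode
// redundant RPCs for a directory that already exists with the right mode.
public class LazyPermissionSetter {
  static void ensureDir(FileSystem fs, Path dir, FsPermission perm) throws IOException {
    try {
      FileStatus status = fs.getFileStatus(dir);
      if (!status.getPermission().equals(perm)) {
        fs.setPermission(dir, perm);
      }
    } catch (FileNotFoundException e) {
      fs.mkdirs(dir, perm);
    }
  }
}
{code}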
 <li> <a href="https://issues.apache.org/jira/browse/YARN-739">YARN-739</a>.
      Major sub-task reported by Siddharth Seth and fixed by Omkar Vinit Joshi <br>
@@ -465,26 +465,26 @@ This only affects a couple of production
 <li> <a href="https://issues.apache.org/jira/browse/YARN-733">YARN-733</a>.
      Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
      <b>TestNMClient fails occasionally</b><br>
-     <blockquote>The problem happens at:
-{code}
-        // getContainerStatus can be called after stopContainer
-        try {
-          ContainerStatus status = nmClient.getContainerStatus(
-              container.getId(), container.getNodeId(),
-              container.getContainerToken());
-          assertEquals(container.getId(), status.getContainerId());
-          assertEquals(ContainerState.RUNNING, status.getState());
-          assertTrue("" + i, status.getDiagnostics().contains(
-              "Container killed by the ApplicationMaster."));
-          assertEquals(-1000, status.getExitStatus());
-        } catch (YarnRemoteException e) {
-          fail("Exception is not expected");
-        }
-{code}
-
-NMClientImpl#stopContainer returns, but the container hasn't necessarily been stopped yet: ContainerManagerImpl implements stopContainer asynchronously. Therefore, the container's status is in transition, and NMClientImpl#getContainerStatus immediately after stopContainer may see either the RUNNING status or the COMPLETE one.
-
-There is a similar problem with NMClientImpl#startContainer.
+     <blockquote>The problem happens at:
+{code}
+        // getContainerStatus can be called after stopContainer
+        try {
+          ContainerStatus status = nmClient.getContainerStatus(
+              container.getId(), container.getNodeId(),
+              container.getContainerToken());
+          assertEquals(container.getId(), status.getContainerId());
+          assertEquals(ContainerState.RUNNING, status.getState());
+          assertTrue("" + i, status.getDiagnostics().contains(
+              "Container killed by the ApplicationMaster."));
+          assertEquals(-1000, status.getExitStatus());
+        } catch (YarnRemoteException e) {
+          fail("Exception is not expected");
+        }
+{code}
+
+NMClientImpl#stopContainer returns, but the container hasn't necessarily been stopped yet: ContainerManagerImpl implements stopContainer asynchronously. Therefore, the container's status is in transition, and NMClientImpl#getContainerStatus immediately after stopContainer may see either the RUNNING status or the COMPLETE one.
+
+There is a similar problem with NMClientImpl#startContainer.
 </blockquote></li>
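
Since stopContainer is handled asynchronously on the NM side, the usual remedy is to poll for the expected state rather than asserting immediately; a generic helper along these lines (not the actual test fix):
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

// Generic polling helper: retry a check until it passes or the deadline
// expires, instead of asserting immediately after an asynchronous call.
public class WaitFor {
  static boolean waitFor(Callable<Boolean> check, long timeoutMs, long intervalMs)
      throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (check.call()) {
        return true;
      }
      TimeUnit.MILLISECONDS.sleep(intervalMs);
    }
    return check.call();
  }
}
{code}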
 <li> <a href="https://issues.apache.org/jira/browse/YARN-731">YARN-731</a>.
      Major sub-task reported by Siddharth Seth and fixed by Zhijie Shen <br>
@@ -493,7 +493,7 @@ There will be the similar problem wrt NM
 <li> <a href="https://issues.apache.org/jira/browse/YARN-726">YARN-726</a>.
      Critical bug reported by Siddharth Seth and fixed by Mayank Bansal <br>
      <b>Queue, FinishTime fields broken on RM UI</b><br>
-     <blockquote>The queue shows up as "Invalid Date"
+     <blockquote>The queue shows up as "Invalid Date"
 Finish Time shows up as a Long value.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-724">YARN-724</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
@@ -510,8 +510,8 @@ Finish Time shows up as a Long value.</b
 <li> <a href="https://issues.apache.org/jira/browse/YARN-717">YARN-717</a>.
      Major sub-task reported by Jian He and fixed by Jian He <br>
      <b>Copy BuilderUtil methods into token-related records</b><br>
-     <blockquote>This is separated from YARN-711, as after changing yarn.api.token from an interface to an abstract class, e.g. ClientTokenPBImpl would have to extend two classes, both TokenPBImpl and the ClientToken abstract class, which is not allowed in Java.
-
+     <blockquote>This is separated from YARN-711, as after changing yarn.api.token from an interface to an abstract class, e.g. ClientTokenPBImpl would have to extend two classes, both TokenPBImpl and the ClientToken abstract class, which is not allowed in Java.
+
 We may remove the ClientToken/ContainerToken/DelegationToken interface and just use the common Token interface </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-716">YARN-716</a>.
      Major task reported by Siddharth Seth and fixed by Siddharth Seth <br>
@@ -520,69 +520,69 @@ We may remove the ClientToken/ContainerT
 <li> <a href="https://issues.apache.org/jira/browse/YARN-715">YARN-715</a>.
      Major bug reported by Siddharth Seth and fixed by Vinod Kumar Vavilapalli <br>
      <b>TestDistributedShell and TestUnmanagedAMLauncher are failing</b><br>
-     <blockquote>Tests are timing out. Looks like this is related to YARN-617.
-{code}
-2013-05-21 17:40:23,693 ERROR [IPC Server handler 0 on 54024] containermanager.ContainerManagerImpl (ContainerManagerImpl.java:authorizeRequest(412)) - Unauthorized request to start container.
-Expected containerId: user Found: container_1369183214008_0001_01_000001
-2013-05-21 17:40:23,694 ERROR [IPC Server handler 0 on 54024] security.UserGroupInformation (UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:user (auth:SIMPLE) cause:org.apache.hado
-Expected containerId: user Found: container_1369183214008_0001_01_000001
-2013-05-21 17:40:23,695 INFO  [IPC Server handler 0 on 54024] ipc.Server (Server.java:run(1864)) - IPC Server handler 0 on 54024, call org.apache.hadoop.yarn.api.ContainerManagerPB.startContainer from 10.
-Expected containerId: user Found: container_1369183214008_0001_01_000001
-org.apache.hadoop.yarn.exceptions.YarnRemoteException: Unauthorized request to start container.
-Expected containerId: user Found: container_1369183214008_0001_01_000001
-  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:43)
-  at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeRequest(ContainerManagerImpl.java:413)
-  at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:440)
-  at org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagerPBServiceImpl.startContainer(ContainerManagerPBServiceImpl.java:72)
-  at org.apache.hadoop.yarn.proto.ContainerManager$ContainerManagerService$2.callBlockingMethod(ContainerManager.java:83)
-  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
+     <blockquote>Tests are timing out. Looks like this is related to YARN-617.
+{code}
+2013-05-21 17:40:23,693 ERROR [IPC Server handler 0 on 54024] containermanager.ContainerManagerImpl (ContainerManagerImpl.java:authorizeRequest(412)) - Unauthorized request to start container.
+Expected containerId: user Found: container_1369183214008_0001_01_000001
+2013-05-21 17:40:23,694 ERROR [IPC Server handler 0 on 54024] security.UserGroupInformation (UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:user (auth:SIMPLE) cause:org.apache.hado
+Expected containerId: user Found: container_1369183214008_0001_01_000001
+2013-05-21 17:40:23,695 INFO  [IPC Server handler 0 on 54024] ipc.Server (Server.java:run(1864)) - IPC Server handler 0 on 54024, call org.apache.hadoop.yarn.api.ContainerManagerPB.startContainer from 10.
+Expected containerId: user Found: container_1369183214008_0001_01_000001
+org.apache.hadoop.yarn.exceptions.YarnRemoteException: Unauthorized request to start container.
+Expected containerId: user Found: container_1369183214008_0001_01_000001
+  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:43)
+  at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeRequest(ContainerManagerImpl.java:413)
+  at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:440)
+  at org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagerPBServiceImpl.startContainer(ContainerManagerPBServiceImpl.java:72)
+  at org.apache.hadoop.yarn.proto.ContainerManager$ContainerManagerService$2.callBlockingMethod(ContainerManager.java:83)
+  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
 {code}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-714">YARN-714</a>.
      Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
      <b>AMRM protocol changes for sending NMToken list</b><br>
-     <blockquote>NMToken will be sent to AM on allocate call if
-1) AM doesn't already have NMToken for the underlying NM
-2) Key rolled over on RM and AM gets new container on the same NM.
+     <blockquote>NMToken will be sent to AM on allocate call if
+1) AM doesn't already have NMToken for the underlying NM
+2) Key rolled over on RM and AM gets new container on the same NM.
 On allocate call RM will send a consolidated list of all required NMTokens.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-711">YARN-711</a>.
      Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Jian He <br>
      <b>Copy BuilderUtil methods into individual records</b><br>
-     <blockquote>BuilderUtils is one giant utils class which has all the factory methods needed for creating records. It is painful for users to figure out how to create records. We are better off having the factories in each record, that way users can easily create records.
-
+     <blockquote>BuilderUtils is one giant utils class which has all the factory methods needed for creating records. It is painful for users to figure out how to create records. We are better off having the factories in each record, that way users can easily create records.
+
 As a first step, we should just copy all the factory methods into individual classes, deprecate BuilderUtils and then slowly move all code off BuilderUtils.</blockquote></li>
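
The pattern being proposed is the familiar static factory on the record type itself, keeping the concrete (PB-backed) implementation out of the public surface; a schematic with invented names, not the real record classes:
{code}
// Schematic: an abstract record exposing a newInstance() factory, with the
// implementation class kept out of the public API.
public abstract class ExampleRecord {
  public static ExampleRecord newInstance(int value) {
    SimpleExampleRecord r = new SimpleExampleRecord();
    r.setValue(value);
    return r;
  }

  public abstract int getValue();
  public abstract void setValue(int value);

  // Stand-in for the generated/PB-backed implementation.
  private static final class SimpleExampleRecord extends ExampleRecord {
    private int value;
    public int getValue() { return value; }
    public void setValue(int value) { this.value = value; }
  }
}
{code}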
 <li> <a href="https://issues.apache.org/jira/browse/YARN-708">YARN-708</a>.
      Major task reported by Siddharth Seth and fixed by Siddharth Seth <br>
      <b>Move RecordFactory classes to hadoop-yarn-api, miscellaneous fixes to the interfaces</b><br>
-     <blockquote>This is required for additional changes in YARN-528. 
+     <blockquote>This is required for additional changes in YARN-528. 
 Some of the interfaces could use some cleanup as well - they shouldn't be declaring YarnException (Runtime) in their signature.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-706">YARN-706</a>.
      Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
      <b>Race Condition in TestFSDownload</b><br>
-     <blockquote>See the test failure in YARN-695
-
+     <blockquote>See the test failure in YARN-695
+
 https://builds.apache.org/job/PreCommit-YARN-Build/957//testReport/org.apache.hadoop.yarn.util/TestFSDownload/testDownloadPatternJar/</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-700">YARN-700</a>.
      Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
      <b>TestInfoBlock fails on Windows because of line ending missmatch</b><br>
-     <blockquote>Exception:
-{noformat}
-Running org.apache.hadoop.yarn.webapp.view.TestInfoBlock
-Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec &lt;&lt;&lt; FAILURE!
-testMultilineInfoBlock(org.apache.hadoop.yarn.webapp.view.TestInfoBlock)  Time elapsed: 873 sec  &lt;&lt;&lt; FAILURE!
-java.lang.AssertionError: 
-	at org.junit.Assert.fail(Assert.java:91)
-	at org.junit.Assert.assertTrue(Assert.java:43)
-	at org.junit.Assert.assertTrue(Assert.java:54)
-	at org.apache.hadoop.yarn.webapp.view.TestInfoBlock.testMultilineInfoBlock(TestInfoBlock.java:79)
-	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
-	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
-	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
-	at java.lang.reflect.Method.invoke(Method.java:597)
-	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
-	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
-	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
-	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
-	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
+     <blockquote>Exception:
+{noformat}
+Running org.apache.hadoop.yarn.webapp.view.TestInfoBlock
+Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec &lt;&lt;&lt; FAILURE!
+testMultilineInfoBlock(org.apache.hadoop.yarn.webapp.view.TestInfoBlock)  Time elapsed: 873 sec  &lt;&lt;&lt; FAILURE!
+java.lang.AssertionError: 
+	at org.junit.Assert.fail(Assert.java:91)
+	at org.junit.Assert.assertTrue(Assert.java:43)
+	at org.junit.Assert.assertTrue(Assert.java:54)
+	at org.apache.hadoop.yarn.webapp.view.TestInfoBlock.testMultilineInfoBlock(TestInfoBlock.java:79)
+	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
+	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
+	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
+	at java.lang.reflect.Method.invoke(Method.java:597)
+	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
+	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
+	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
+	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
+	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
 {noformat}</blockquote></li>
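
Mismatches like this are commonly handled by normalizing line endings before comparing multi-line strings; a hedged sketch of that idea (not the committed fix):
{code}
// Sketch: normalize CRLF to LF before asserting on multi-line output so the
// check behaves the same on Windows and Unix.
public class LineEndings {
  static String normalize(String s) {
    return s.replace("\r\n", "\n");
  }

  public static void main(String[] args) {
    String expected = "line1\nline2";
    String actual = "line1\r\nline2";                        // what Windows produced
    System.out.println(normalize(actual).equals(expected));  // true
  }
}
{code}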
 <li> <a href="https://issues.apache.org/jira/browse/YARN-695">YARN-695</a>.
      Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
@@ -591,28 +591,28 @@ java.lang.AssertionError: 
 <li> <a href="https://issues.apache.org/jira/browse/YARN-694">YARN-694</a>.
      Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
      <b>Start using NMTokens to authenticate all communication with NM</b><br>
-     <blockquote>AM uses the NMToken to authenticate all the AM-NM communication.
-NM will validate NMToken in below manner
-* If NMToken is using current or previous master key then the NMToken is valid. In this case it will update its cache with this key corresponding to appId.
-* If NMToken is using the master key which is present in NM's cache corresponding to AM's appId then it will be validated based on this.
-* If NMToken is invalid then NM will reject AM calls.
-
-Modification for ContainerToken
-* At present RPC validates AM-NM communication based on the ContainerToken. It will be replaced with the NMToken, and from now on the AM will use one NMToken per NM (replacing the earlier behavior of one ContainerToken per container per NM).
-* startContainer in a secure environment currently takes the ContainerToken from the UGI (YARN-617); after this change it will take it from the payload (Container).
+     <blockquote>AM uses the NMToken to authenticate all the AM-NM communication.
+The NM will validate the NMToken in the following manner:
+* If the NMToken uses the current or previous master key, it is valid. In this case the NM updates its cache with this key for the corresponding appId.
+* If the NMToken uses the master key that is present in the NM's cache for the AM's appId, it is validated against that key.
+* If the NMToken is invalid, the NM will reject the AM's calls (a simplified sketch of this check follows the item).
+
+Modifications for ContainerToken:
+* At present, RPC validates AM-NM communication based on the ContainerToken. It will be replaced with the NMToken. From now on, the AM will use one NMToken per NM (replacing the earlier behavior of one ContainerToken per container per NM).
+* In a secure environment, startContainer currently takes the ContainerToken from the UGI (YARN-617); after this change it will take it from the payload (Container).
 * ContainerToken will exist and it will only be used to validate the AM's container start request.</blockquote></li>
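+
+For illustration only, a minimal sketch of the master-key check described above, written with plain Java collections; the class and method names here are hypothetical and are not taken from the actual NodeManager code:
+{code}
+import java.util.HashMap;
+import java.util.Map;
+
+// Hypothetical, simplified model of the NMToken check described above.
+public class NMTokenCheckSketch {
+
+    static class MasterKey {
+        final int keyId;
+        MasterKey(int keyId) { this.keyId = keyId; }
+    }
+
+    private MasterKey currentKey = new MasterKey(2);
+    private MasterKey previousKey = new MasterKey(1);
+    // Per-application cache of the key last used by that AM.
+    private final Map<String, MasterKey> appIdToKey = new HashMap<String, MasterKey>();
+
+    /** Returns true if the token's key id is acceptable for this appId. */
+    public synchronized boolean isTokenValid(String appId, int tokenKeyId) {
+        // Case 1: token uses the current or previous master key -> valid,
+        // and the per-app cache is updated to this key.
+        if (tokenKeyId == currentKey.keyId || tokenKeyId == previousKey.keyId) {
+            appIdToKey.put(appId, tokenKeyId == currentKey.keyId ? currentKey : previousKey);
+            return true;
+        }
+        // Case 2: token uses the key already cached for this AM's appId -> valid.
+        MasterKey cached = appIdToKey.get(appId);
+        if (cached != null && cached.keyId == tokenKeyId) {
+            return true;
+        }
+        // Case 3: anything else is rejected.
+        return false;
+    }
+
+    public static void main(String[] args) {
+        NMTokenCheckSketch nm = new NMTokenCheckSketch();
+        System.out.println(nm.isTokenValid("app_1", 2));  // current key -> true
+        System.out.println(nm.isTokenValid("app_1", 1));  // previous key -> true
+        System.out.println(nm.isTokenValid("app_1", 7));  // unknown key -> false
+    }
+}
+{code}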
 <li> <a href="https://issues.apache.org/jira/browse/YARN-693">YARN-693</a>.
      Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
      <b>Sending NMToken to AM on allocate call</b><br>
-     <blockquote>This is part of YARN-613.
-As per the updated design, AM will receive per NM, NMToken in following scenarios
-* AM is receiving first container on underlying NM.
-* AM is receiving container on underlying NM after either NM or RM rebooted.
-** After RM reboot, as RM doesn't remember (persist) the information about keys issued per AM per NM, it will reissue tokens in case AM gets new container on underlying NM. However on NM side NM will still retain older token until it receives new token to support long running jobs (in work preserving environment).
-** After NM reboot, RM will delete the token information corresponding to that AM for all AMs.
-* AM is receiving container on underlying NM after NMToken master key is rolled over on RM side.
-In all the cases if AM receives new NMToken then it is suppose to store it for future NM communication until it receives a new one.
-
+     <blockquote>This is part of YARN-613.
+As per the updated design, the AM will receive a per-NM NMToken in the following scenarios:
+* The AM receives its first container on the underlying NM.
+* The AM receives a container on the underlying NM after either the NM or the RM has rebooted.
+** After an RM reboot, since the RM does not persist the information about which keys were issued per AM per NM, it will reissue a token when the AM gets a new container on the underlying NM. On the NM side, however, the NM will retain the older token until it receives the new one, to support long-running jobs (in a work-preserving environment).
+** After an NM reboot, the RM will delete the token information for that NM across all AMs.
+* The AM receives a container on the underlying NM after the NMToken master key has been rolled over on the RM side.
+In all these cases, if the AM receives a new NMToken, it is supposed to store it for future NM communication until it receives a newer one.
+
 AMRMClient should expose these NMTokens to the client (a simplified sketch of such a per-NM token cache follows this item).</blockquote></li>
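+
+As a rough illustration of the "keep the latest NMToken per NM" rule described above, a small sketch using plain Java; the names here are hypothetical and are not the real AMRMClient API:
+{code}
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+// Hypothetical client-side cache: keep only the most recently received
+// NMToken for each NM address, replacing any older one.
+public class NMTokenCacheSketch {
+
+    private final Map<String, String> nmAddressToToken = new ConcurrentHashMap<String, String>();
+
+    /** Called whenever an allocate response carries new NMTokens. */
+    public void storeToken(String nmAddress, String token) {
+        nmAddressToToken.put(nmAddress, token);  // newer token replaces the old one
+    }
+
+    /** Used when opening a connection to the given NM. */
+    public String getToken(String nmAddress) {
+        return nmAddressToToken.get(nmAddress);
+    }
+
+    public static void main(String[] args) {
+        NMTokenCacheSketch cache = new NMTokenCacheSketch();
+        cache.storeToken("node1:45454", "token-v1");
+        cache.storeToken("node1:45454", "token-v2");  // e.g. after an RM key roll-over
+        System.out.println(cache.getToken("node1:45454"));  // prints token-v2
+    }
+}
+{code}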
 <li> <a href="https://issues.apache.org/jira/browse/YARN-692">YARN-692</a>.
      Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
@@ -621,8 +621,8 @@ AMRMClient should expose these NMToken t
 <li> <a href="https://issues.apache.org/jira/browse/YARN-690">YARN-690</a>.
      Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (resourcemanager)<br>
      <b>RM exits on token cancel/renew problems</b><br>
-     <blockquote>The DelegationTokenRenewer thread is critical to the RM.  When a non-IOException occurs, the thread calls System.exit to prevent the RM from running w/o the thread.  It should be exiting only on non-RuntimeExceptions.
-
+     <blockquote>The DelegationTokenRenewer thread is critical to the RM.  When a non-IOException occurs, the thread calls System.exit to prevent the RM from running without the thread.  It should be exiting only on non-RuntimeExceptions.
+
 The problem is especially bad in 23 because the YARN protobuf layer converts IOExceptions into UndeclaredThrowableExceptions (a RuntimeException), which causes the renewer to abort the process.  An UnknownHostException takes down the RM...</blockquote></li>
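+
+A schematic sketch of the kind of exception handling argued for above, deciding whether to abort based on the exception type rather than exiting on every failure; this is purely illustrative and is not the actual DelegationTokenRenewer code:
+{code}
+import java.io.IOException;
+import java.lang.reflect.UndeclaredThrowableException;
+
+// Illustrative only: unwrap RuntimeException wrappers and treat recoverable
+// I/O problems (e.g. an unreachable host) as non-fatal instead of exiting.
+public class RenewFailureHandlingSketch {
+
+    static void handleRenewFailure(Throwable t) {
+        Throwable cause = t;
+        // The RPC layer may wrap an IOException in an UndeclaredThrowableException.
+        if (cause instanceof UndeclaredThrowableException && cause.getCause() != null) {
+            cause = cause.getCause();
+        }
+        if (cause instanceof IOException) {
+            // Recoverable: log and keep the ResourceManager running.
+            System.err.println("Token renewal failed, will retry: " + cause);
+        } else {
+            // Unexpected programming error: treat as fatal.
+            System.err.println("Fatal error in renewer thread: " + cause);
+            // System.exit(-1);  // left commented out so this sketch stays runnable
+        }
+    }
+
+    public static void main(String[] args) {
+        handleRenewFailure(new UndeclaredThrowableException(
+                new java.net.UnknownHostException("rm-host")));  // logged, not fatal
+        handleRenewFailure(new NullPointerException("bug"));     // treated as fatal
+    }
+}
+{code}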
 <li> <a href="https://issues.apache.org/jira/browse/YARN-686">YARN-686</a>.
      Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (api)<br>
@@ -655,9 +655,9 @@ The problem is especially bad in 23 beca
 <li> <a href="https://issues.apache.org/jira/browse/YARN-646">YARN-646</a>.
      Major bug reported by Dapeng Sun and fixed by Dapeng Sun (documentation)<br>
      <b>Some issues in Fair Scheduler's document</b><br>
-     <blockquote>Issues are found in the doc page for Fair Scheduler http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html:
-1.In the section &#8220;Configuration&#8221;, It contains two properties named &#8220;yarn.scheduler.fair.minimum-allocation-mb&#8221;, the second one should be &#8220;yarn.scheduler.fair.maximum-allocation-mb&#8221;
-2.In the section &#8220;Allocation file format&#8221;, the document tells &#8220; The format contains three types of elements&#8221;, but it lists four types of elements following that.
+     <blockquote>Issues found in the Fair Scheduler documentation page http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html:
+1. In the section &#8220;Configuration&#8221;, two properties are both named &#8220;yarn.scheduler.fair.minimum-allocation-mb&#8221;; the second one should be &#8220;yarn.scheduler.fair.maximum-allocation-mb&#8221;.
+2. In the section &#8220;Allocation file format&#8221;, the document states &#8220;The format contains three types of elements&#8221;, but it then lists four types of elements.
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-645">YARN-645</a>.
      Major bug reported by Jian He and fixed by Jian He <br>
@@ -722,8 +722,8 @@ The problem is especially bad in 23 beca
 <li> <a href="https://issues.apache.org/jira/browse/YARN-617">YARN-617</a>.
      Minor sub-task reported by Vinod Kumar Vavilapalli and fixed by Omkar Vinit Joshi <br>
      <b>In unsecure mode, AM can fake resource requirements </b><br>
-     <blockquote>Without security, it is impossible to completely avoid AMs faking resources. We can at the least make it as difficult as possible by using the same container tokens and the RM-NM shared key mechanism over unauthenticated RM-NM channel.
-
+     <blockquote>Without security, it is impossible to completely avoid AMs faking resources. We can at least make it as difficult as possible by using the same container tokens and the RM-NM shared key mechanism over the unauthenticated RM-NM channel.
+
 At a minimum, this will avoid accidental bugs in AMs in unsecure mode.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-615">YARN-615</a>.
      Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
@@ -740,12 +740,12 @@ In the minimum, this will avoid accident
 <li> <a href="https://issues.apache.org/jira/browse/YARN-605">YARN-605</a>.
      Major bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
      <b>Failing unit test in TestNMWebServices when using git for source control </b><br>
-     <blockquote>Failed tests:   testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
-  testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
-  testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
-  testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
-  testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
-  testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
+     <blockquote>Failed tests:   testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
+  testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
+  testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
+  testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
+  testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
+  testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789
   testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-600">YARN-600</a>.
      Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
@@ -754,10 +754,10 @@ In the minimum, this will avoid accident
 <li> <a href="https://issues.apache.org/jira/browse/YARN-599">YARN-599</a>.
      Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
      <b>Refactoring submitApplication in ClientRMService and RMAppManager</b><br>
-     <blockquote>Currently, ClientRMService#submitApplication call RMAppManager#handle, and consequently call RMAppMangager#submitApplication directly, though the code looks like scheduling an APP_SUBMIT event.
-
-In addition, the validation code before creating an RMApp instance is not well organized. Ideally, the dynamic validation, which depends on the RM's configuration, should be put in RMAppMangager#submitApplication. RMAppMangager#submitApplication is called by ClientRMService#submitApplication and RMAppMangager#recover. Since the configuration may be changed after RM restarts, the validation needs to be done again even in recovery mode. Therefore, resource request validation, which based on min/max resource limits, should be moved from ClientRMService#submitApplication to RMAppMangager#submitApplication. On the other hand, the static validation, which is independent of the RM's configuration should be put in ClientRMService#submitApplication, because it is only need to be done once during the first submission.
-
+     <blockquote>Currently, ClientRMService#submitApplication calls RMAppManager#handle, and consequently calls RMAppManager#submitApplication directly, though the code looks like it is scheduling an APP_SUBMIT event.
+
+In addition, the validation code that runs before creating an RMApp instance is not well organized. Ideally, the dynamic validation, which depends on the RM's configuration, should be put in RMAppManager#submitApplication. RMAppManager#submitApplication is called by ClientRMService#submitApplication and RMAppManager#recover. Since the configuration may change after the RM restarts, the validation needs to be done again even in recovery mode. Therefore, resource request validation, which is based on min/max resource limits, should be moved from ClientRMService#submitApplication to RMAppManager#submitApplication. On the other hand, the static validation, which is independent of the RM's configuration, should be put in ClientRMService#submitApplication, because it only needs to be done once, during the first submission.
+
 Furthermore, the try-catch flow in RMAppManager#submitApplication has a flaw: the method is not synchronized. If two application submissions with the same application ID enter the method, and one completes RMApp instantiation while the other completes putting its RMApp instance into rmContext, the slower submission will cause an exception due to the duplicate application ID. However, with the current code flow that exception causes the RMApp instance already in rmContext (which belongs to the faster submission) to be rejected. (A rough sketch of the proposed validation split follows this item.)</blockquote></li>
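+
+A very rough sketch of the static/dynamic validation split argued for above; the class and method names here are hypothetical, simplified stand-ins rather than the actual ResourceManager code:
+{code}
+// Hypothetical sketch: static checks once at submission time, dynamic checks
+// (which depend on the current RM configuration) on every submit, including recovery.
+public class SubmitValidationSketch {
+
+    static class Submission {
+        String appId;
+        String queue;
+        int requestedMemoryMb;
+    }
+
+    // Stand-in for ClientRMService#submitApplication: configuration-independent checks.
+    static void staticValidation(Submission s) {
+        if (s.appId == null || s.appId.isEmpty()) {
+            throw new IllegalArgumentException("application id must be set");
+        }
+        if (s.queue == null || s.queue.isEmpty()) {
+            throw new IllegalArgumentException("queue must be set");
+        }
+    }
+
+    // Stand-in for RMAppManager#submitApplication: checks against the current config,
+    // re-run on recovery because the configuration may have changed after a restart.
+    static void dynamicValidation(Submission s, int minMb, int maxMb) {
+        if (s.requestedMemoryMb < minMb || s.requestedMemoryMb > maxMb) {
+            throw new IllegalArgumentException(
+                "resource request outside [" + minMb + ", " + maxMb + "] MB");
+        }
+    }
+
+    public static void main(String[] args) {
+        Submission s = new Submission();
+        s.appId = "application_0001";
+        s.queue = "default";
+        s.requestedMemoryMb = 2048;
+        staticValidation(s);              // done once, at first submission
+        dynamicValidation(s, 1024, 8192); // done on submission and again on recovery
+        System.out.println("submission accepted");
+    }
+}
+{code}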
 <li> <a href="https://issues.apache.org/jira/browse/YARN-598">YARN-598</a>.
      Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
@@ -766,18 +766,18 @@ Furthermore, try-catch flow in RMAppMang
 <li> <a href="https://issues.apache.org/jira/browse/YARN-597">YARN-597</a>.
      Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
      <b>TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools</b><br>
-     <blockquote>{{testDownloadArchive}}, {{testDownloadPatternJar}} and {{testDownloadArchiveZip}} fail with the similar Shell ExitCodeException:
-
-{code}
-testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time elapsed: 480 sec  &lt;&lt;&lt; ERROR!
-org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload: No such file or directory
-gzip: 1: No such file or directory
-
-	at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
-	at org.apache.hadoop.util.Shell.run(Shell.java:292)
-	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
-	at org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
-	at org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
+     <blockquote>{{testDownloadArchive}}, {{testDownloadPatternJar}} and {{testDownloadArchiveZip}} fail with similar Shell ExitCodeExceptions:
+
+{code}
+testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time elapsed: 480 sec  &lt;&lt;&lt; ERROR!
+org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload: No such file or directory
+gzip: 1: No such file or directory
+
+	at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
+	at org.apache.hadoop.util.Shell.run(Shell.java:292)
+	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
+	at org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
+	at org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}</blockquote></li>
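+
+One portable alternative to shelling out to tar/gzip in such tests is to build the archive with the JDK's own java.util.zip classes; a small standalone sketch (not the actual TestFSDownload fix) is:
+{code}
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipOutputStream;
+
+// Creates a small zip file without invoking external tools such as bash or gzip,
+// so it behaves the same way on Windows and on Unix-like systems.
+public class CreateZipWithoutShell {
+
+    static void createZipFile(File zipFile) throws IOException {
+        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile))) {
+            zos.putNextEntry(new ZipEntry("data/hello.txt"));
+            zos.write("hello".getBytes(StandardCharsets.UTF_8));
+            zos.closeEntry();
+        }
+    }
+
+    public static void main(String[] args) throws IOException {
+        File zip = File.createTempFile("test", ".zip");
+        createZipFile(zip);
+        System.out.println("wrote " + zip.getAbsolutePath() + " (" + zip.length() + " bytes)");
+    }
+}
+{code}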
 <li> <a href="https://issues.apache.org/jira/browse/YARN-595">YARN-595</a>.
      Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
@@ -838,32 +838,32 @@ gzip: 1: No such file or directory
 <li> <a href="https://issues.apache.org/jira/browse/YARN-571">YARN-571</a>.
      Major sub-task reported by Hitesh Shah and fixed by Omkar Vinit Joshi <br>
      <b>User should not be part of ContainerLaunchContext</b><br>
-     <blockquote>Today, a user is expected to set the user name in the CLC when either submitting an application or launching a container from the AM. This does not make sense as the user can/has been identified by the RM as part of the RPC layer.
-
-Solution would be to move the user information into either the Container object or directly into the ContainerToken which can then be used by the NM to launch the container. This user information would set into the container by the RM.
-
+     <blockquote>Today, a user is expected to set the user name in the CLC when either submitting an application or launching a container from the AM. This does not make sense, as the user can be (and has been) identified by the RM as part of the RPC layer.
+
+The solution would be to move the user information either into the Container object or directly into the ContainerToken, which can then be used by the NM to launch the container. This user information would be set into the container by the RM.
+
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-568">YARN-568</a>.
      Major improvement reported by Carlo Curino and fixed by Carlo Curino (scheduler)<br>
      <b>FairScheduler: support for work-preserving preemption </b><br>
-     <blockquote>In the attached patch, we modified  the FairScheduler to substitute its preemption-by-killling with a work-preserving version of preemption (followed by killing if the AMs do not respond quickly enough). This should allows to run preemption checking more often, but kill less often (proper tuning to be investigated).  Depends on YARN-567 and YARN-45, is related to YARN-569.
+     <blockquote>In the attached patch, we modified the FairScheduler to substitute its preemption-by-killing with a work-preserving version of preemption (followed by killing if the AMs do not respond quickly enough). This should allow running preemption checks more often but killing less often (proper tuning to be investigated). Depends on YARN-567 and YARN-45; related to YARN-569.
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-567">YARN-567</a>.
      Major sub-task reported by Carlo Curino and fixed by Carlo Curino (resourcemanager)<br>
      <b>RM changes to support preemption for FairScheduler and CapacityScheduler</b><br>
-     <blockquote>A common tradeoff in scheduling jobs is between keeping the cluster busy and enforcing capacity/fairness properties. FairScheduler and CapacityScheduler takes opposite stance on how to achieve this. 
-
-The FairScheduler, leverages task-killing to quickly reclaim resources from currently running jobs and redistributing them among new jobs, thus keeping the cluster busy but waste useful work. The CapacityScheduler is typically tuned
-to limit the portion of the cluster used by each queue so that the likelihood of violating capacity is low, thus never wasting work, but risking to keep the cluster underutilized or have jobs waiting to obtain their rightful capacity. 
-
-By introducing the notion of a work-preserving preemption we can remove this tradeoff.  This requires a protocol for preemption (YARN-45), and ApplicationMasters that can answer to preemption  efficiently (e.g., by saving their intermediate state, this will be posted for MapReduce in a separate JIRA soon), together with a scheduler that can issues preemption requests (discussed in separate JIRAs YARN-568 and YARN-569).
-
-The changes we track with this JIRA are common to FairScheduler and CapacityScheduler, and are mostly propagation of preemption decisions through the ApplicationMastersService.
+     <blockquote>A common tradeoff in scheduling jobs is between keeping the cluster busy and enforcing capacity/fairness properties. The FairScheduler and the CapacityScheduler take opposite stances on how to achieve this.
+
+The FairScheduler leverages task-killing to quickly reclaim resources from currently running jobs and redistribute them among new jobs, thus keeping the cluster busy but wasting useful work. The CapacityScheduler is typically tuned
+to limit the portion of the cluster used by each queue so that the likelihood of violating capacity is low, thus never wasting work, but risking keeping the cluster underutilized or having jobs wait to obtain their rightful capacity.
+
+By introducing the notion of work-preserving preemption we can remove this tradeoff.  This requires a protocol for preemption (YARN-45) and ApplicationMasters that can respond to preemption efficiently (e.g., by saving their intermediate state; this will be posted for MapReduce in a separate JIRA soon), together with a scheduler that can issue preemption requests (discussed in the separate JIRAs YARN-568 and YARN-569).
+
+The changes we track with this JIRA are common to the FairScheduler and the CapacityScheduler, and are mostly the propagation of preemption decisions through the ApplicationMastersService.
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-563">YARN-563</a>.
      Major sub-task reported by Thomas Weise and fixed by Mayank Bansal <br>
      <b>Add application type to ApplicationReport </b><br>
-     <blockquote>This field is needed to distinguish different types of applications (app master implementations). For example, we may run applications of type XYZ in a cluster alongside MR and would like to filter applications by type.
+     <blockquote>This field is needed to distinguish different types of applications (app master implementations). For example, we may run applications of type XYZ in a cluster alongside MR and would like to filter applications by type.
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-562">YARN-562</a>.
      Major sub-task reported by Jian He and fixed by Jian He (resourcemanager)<br>
@@ -872,10 +872,10 @@ The changes we track with this JIRA are 
 <li> <a href="https://issues.apache.org/jira/browse/YARN-561">YARN-561</a>.
      Major sub-task reported by Hitesh Shah and fixed by Xuan Gong <br>
      <b>Nodemanager should set some key information into the environment of every container that it launches.</b><br>
-     <blockquote>Information such as containerId, nodemanager hostname, nodemanager port is not set in the environment when any container is launched. 
-
-For an AM, the RM does all of this for it but for a container launched by an application, all of the above need to be set by the ApplicationMaster. 
-
+     <blockquote>Information such as the containerId, the nodemanager hostname, and the nodemanager port is not set in the environment when a container is launched.
+
+For an AM, the RM does all of this for it; but for a container launched by an application, all of the above need to be set by the ApplicationMaster.
+
 At a minimum, the container ID would be a useful piece of information. If the container wishes to talk to its local NM, the nodemanager-related information would also come in handy (a small sketch of reading such values from the environment follows this item).</blockquote></li>
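+
+As an illustration of how a container process could pick such values up once the NM exports them, a tiny sketch; the environment variable names used here (CONTAINER_ID, NM_HOST, NM_PORT) are assumptions for the example, not a statement of what the NodeManager actually sets:
+{code}
+// Illustrative only: a container's main class reading identity information
+// from environment variables that the NodeManager is assumed to export.
+public class ContainerEnvSketch {
+    public static void main(String[] args) {
+        String containerId = System.getenv("CONTAINER_ID"); // assumed variable name
+        String nmHost = System.getenv("NM_HOST");           // assumed variable name
+        String nmPort = System.getenv("NM_PORT");           // assumed variable name
+
+        System.out.println("container id: " + containerId);
+        if (nmHost != null && nmPort != null) {
+            System.out.println("local NM at " + nmHost + ":" + nmPort);
+        } else {
+            System.out.println("NM address not provided in the environment");
+        }
+    }
+}
+{code}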
 <li> <a href="https://issues.apache.org/jira/browse/YARN-557">YARN-557</a>.
      Major bug reported by Chris Nauroth and fixed by Chris Nauroth (applications)<br>
@@ -884,24 +884,24 @@ At the minimum, container id would be a 
 <li> <a href="https://issues.apache.org/jira/browse/YARN-553">YARN-553</a>.
      Minor sub-task reported by Harsh J and fixed by Karthik Kambatla (client)<br>
      <b>Have YarnClient generate a directly usable ApplicationSubmissionContext</b><br>
-     <blockquote>Right now, we're doing multiple steps to create a relevant ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
-
-{code}
-    GetNewApplicationResponse newApp = yarnClient.getNewApplication();
-    ApplicationId appId = newApp.getApplicationId();
-
-    ApplicationSubmissionContext appContext = Records.newRecord(ApplicationSubmissionContext.class);
-
-    appContext.setApplicationId(appId);
-{code}
-
-A simplified way may be to have the GetNewApplicationResponse itself provide a helper method that builds a usable ApplicationSubmissionContext for us. Something like:
-
-{code}
-GetNewApplicationResponse newApp = yarnClient.getNewApplication();
-ApplicationSubmissionContext appContext = newApp.generateApplicationSubmissionContext();
-{code}
-
+     <blockquote>Right now, we're doing multiple steps to create a relevant ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
+
+{code}
+    GetNewApplicationResponse newApp = yarnClient.getNewApplication();
+    ApplicationId appId = newApp.getApplicationId();
+
+    ApplicationSubmissionContext appContext = Records.newRecord(ApplicationSubmissionContext.class);
+
+    appContext.setApplicationId(appId);
+{code}
+
+A simplified way may be to have the GetNewApplicationResponse itself provide a helper method that builds a usable ApplicationSubmissionContext for us. Something like:
+
+{code}
+GetNewApplicationResponse newApp = yarnClient.getNewApplication();
+ApplicationSubmissionContext appContext = newApp.generateApplicationSubmissionContext();
+{code}
+
 [The above method could also take an argument for the container launch spec, or perhaps pre-load defaults like min-resource, etc. in the returned object, aside from just associating the application ID automatically.]</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-549">YARN-549</a>.
      Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
@@ -914,39 +914,39 @@ ApplicationSubmissionContext appContext 
 <li> <a href="https://issues.apache.org/jira/browse/YARN-547">YARN-547</a>.
      Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
      <b>Race condition in Public / Private Localizer may result into resource getting downloaded again</b><br>
-     <blockquote>Public Localizer :
-At present when multiple containers try to request a localized resource 
-* If the resource is not present then first it is created and Resource Localization starts ( LocalizedResource is in DOWNLOADING state)
-* Now if in this state multiple ResourceRequestEvents arrive then ResourceLocalizationEvents are sent for all of them.
-
-Most of the times it is not resulting into a duplicate resource download but there is a race condition present there. Inside ResourceLocalization (for public download) all the requests are added to local attempts map. If a new request comes in then first it is checked in this map before a new download starts for the same. For the current download the request will be there in the map. Now if a same resource request comes in then it will rejected (i.e. resource is getting downloaded already). However if the current download completes then the request will be removed from this local map. Now after this removal if the LocalizerRequestEvent comes in then as it is not present in local map the resource will be downloaded again.
-
-PrivateLocalizer :
-Here a different but similar race condition is present.
-* Here inside findNextResource method call; each LocalizerRunner tries to grab a lock on LocalizerResource. If the lock is not acquired then it will keep trying until the resource state changes to LOCALIZED. This lock will be released by the LocalizerRunner when download completes.
-* Now if another ContainerLocalizer tries to grab the lock on a resource before LocalizedResource state changes to LOCALIZED then resource will be downloaded again.
-
+     <blockquote>Public Localizer:
+At present, when multiple containers request a localized resource:
+* If the resource is not present, it is first created and resource localization starts (the LocalizedResource is in the DOWNLOADING state).
+* If, while in this state, multiple ResourceRequestEvents arrive, then ResourceLocalizationEvents are sent for all of them.
+
+Most of the time this does not result in a duplicate resource download, but a race condition is present. Inside ResourceLocalization (for public downloads), all requests are added to a local attempts map. When a new request comes in, it is first checked against this map before a new download starts for the same resource. While a download is in progress, the request is present in the map, so an identical resource request is rejected (i.e., the resource is already being downloaded). However, once the current download completes, the request is removed from this local map. If a LocalizerRequestEvent arrives after this removal, the resource is no longer in the local map and will be downloaded again.
+
+PrivateLocalizer:
+Here a different but similar race condition is present.
+* Inside the findNextResource method call, each LocalizerRunner tries to grab a lock on the LocalizedResource. If the lock is not acquired, it keeps trying until the resource state changes to LOCALIZED. This lock is released by the LocalizerRunner when the download completes.
+* If another ContainerLocalizer grabs the lock on a resource before the LocalizedResource state changes to LOCALIZED, the resource will be downloaded again.
+
 In both places, the root cause is that all the threads try to acquire the lock on the resource, but the current state of the LocalizedResource is not taken into consideration (a condensed sketch of a state-aware check follows this item).</blockquote></li>
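+
+A condensed sketch of the state-aware check suggested above (consult the resource's current state while holding its lock, rather than relying only on presence in a map); the types here are simplified stand-ins, not the real localization classes:
+{code}
+import java.util.HashMap;
+import java.util.Map;
+
+// Simplified stand-in for the localization flow: before starting a download,
+// check the resource's current state under its own lock, so a resource that is
+// already DOWNLOADING or LOCALIZED is never fetched a second time.
+public class LocalizationSketch {
+
+    enum State { INIT, DOWNLOADING, LOCALIZED }
+
+    static class LocalizedResource {
+        State state = State.INIT;
+    }
+
+    private final Map<String, LocalizedResource> resources =
+            new HashMap<String, LocalizedResource>();
+
+    /** Returns true only for the one caller that should actually download. */
+    public boolean shouldStartDownload(String key) {
+        LocalizedResource rsrc;
+        synchronized (resources) {
+            rsrc = resources.get(key);
+            if (rsrc == null) {
+                rsrc = new LocalizedResource();
+                resources.put(key, rsrc);
+            }
+        }
+        synchronized (rsrc) {
+            if (rsrc.state != State.INIT) {
+                return false;          // someone else is downloading or already done
+            }
+            rsrc.state = State.DOWNLOADING;
+            return true;
+        }
+    }
+
+    public void markLocalized(String key) {
+        LocalizedResource rsrc;
+        synchronized (resources) {
+            rsrc = resources.get(key);
+        }
+        if (rsrc != null) {
+            synchronized (rsrc) {
+                rsrc.state = State.LOCALIZED;
+            }
+        }
+    }
+
+    public static void main(String[] args) {
+        LocalizationSketch s = new LocalizationSketch();
+        System.out.println(s.shouldStartDownload("lib.jar")); // true: first requester downloads
+        System.out.println(s.shouldStartDownload("lib.jar")); // false: already in progress
+        s.markLocalized("lib.jar");
+        System.out.println(s.shouldStartDownload("lib.jar")); // false: already localized
+    }
+}
+{code}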
 <li> <a href="https://issues.apache.org/jira/browse/YARN-542">YARN-542</a>.
      Major bug reported by Vinod Kumar Vavilapalli and fixed by Zhijie Shen <br>
      <b>Change the default global AM max-attempts value to be not one</b><br>
-     <blockquote>Today, the global AM max-attempts is set to 1 which is a bad choice. AM max-attempts accounts for both AM level failures as well as container crashes due to localization issue, lost nodes etc. To account for AM crashes due to problems that are not caused by user code, mainly lost nodes, we want to give AMs some retires.
-

[... 1420 lines stripped ...]