Posted to common-commits@hadoop.apache.org by ki...@apache.org on 2020/05/12 15:50:16 UTC

[hadoop] branch trunk updated: HADOOP-17035. fixed typos (timeout, interruped) (#2007)

This is an automated email from the ASF dual-hosted git repository.

kihwal pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
     new a3f945f  HADOOP-17035. fixed typos (timeout, interruped) (#2007)
a3f945f is described below

commit a3f945fb8466d461d42ce60f0bc12c96fbb2db23
Author: Elixir Kook <ju...@gmail.com>
AuthorDate: Wed May 13 00:50:04 2020 +0900

    HADOOP-17035. fixed typos (timeout, interruped) (#2007)
    
    Co-authored-by: Sungpeo Kook <el...@kakaocorp.com>
---
 .../src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java  | 2 +-
 .../main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java    | 2 +-
 .../src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java | 4 ++--
 .../hadoop-yarn-site/src/site/markdown/GracefulDecommission.md        | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java
index 272eae7..76c74a3 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java
@@ -40,7 +40,7 @@ import org.slf4j.LoggerFactory;
 import static org.junit.Assert.*;
 
 /**
- * This tests timout out from SocketInputStream and
+ * This tests timeout out from SocketInputStream and
  * SocketOutputStream using pipes.
  * 
  * Normal read and write using these streams are tested by pretty much
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index d390c1e..c772d8f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -3431,7 +3431,7 @@ public class DataNode extends ReconfigurableBase
       unhealthyVolumes = volumeChecker.checkAllVolumes(data);
       lastDiskErrorCheck = Time.monotonicNow();
     } catch (InterruptedException e) {
-      LOG.error("Interruped while running disk check", e);
+      LOG.error("Interrupted while running disk check", e);
       throw new IOException("Interrupted while running disk check", e);
     }
 
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
index 2c2ff1f..7491f21 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
@@ -225,7 +225,7 @@ public class ClientServiceDelegate {
         try {
           Thread.sleep(2000);
         } catch (InterruptedException e1) {
-          LOG.warn("getProxy() call interruped", e1);
+          LOG.warn("getProxy() call interrupted", e1);
           throw new YarnRuntimeException(e1);
         }
         try {
@@ -239,7 +239,7 @@ public class ClientServiceDelegate {
           return checkAndGetHSProxy(null, JobState.RUNNING);
         }
       } catch (InterruptedException e) {
-        LOG.warn("getProxy() call interruped", e);
+        LOG.warn("getProxy() call interrupted", e);
         throw new YarnRuntimeException(e);
       } catch (YarnException e) {
         throw new IOException(e);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
index 2e83ca2..e7ce657 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/GracefulDecommission.md
@@ -58,7 +58,7 @@ Features
 `yarn rmadmin -refreshNodes [-g [timeout in seconds] -client|server]` notifies NodesListManager to detect and handle include and exclude hosts changes. NodesListManager loads excluded hosts from the exclude file as specified through the `yarn.resourcemanager.nodes.exclude-path` configuration in yarn-site.xml. (Note:  It is unnecessary to restart RM in case of changing the exclude-path 
 as this config will be read again for every `refreshNodes` command)
 
-The format of the file could be plain text or XML depending the extension of the file. Only the XML format supports per node timout for graceful decommissioning.
+The format of the file could be plain text or XML depending the extension of the file. Only the XML format supports per node timeout for graceful decommissioning.
 
 NodesListManager inspects and compares status of RMNodes in resource manager and the exclude list, and apply necessary actions based on following rules:
 
@@ -83,7 +83,7 @@ In case of server side timeout:
 2. Use the timeout in `yarn rmadmin -refreshNodes -g [timeout in seconds] -server|client` if specified;
 3. Use the default timeout specified through *"yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs"* configuration.
 
-In case of client side timout (see bellow):
+In case of client side timeout (see bellow):
 
 1. Only the command line parameter defined by the `-g` flag will be used. 
 
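Note for readers skimming the diff: the corrected log messages in DataNode.java and ClientServiceDelegate.java sit inside Hadoop's usual interrupt-handling shape, catch InterruptedException, log it, then surface it to the caller as a checked or runtime exception. The sketch below is a minimal, self-contained Java illustration of that pattern, not the actual Hadoop classes; the class and method names are hypothetical, it assumes SLF4J on the classpath (as imported in the test file above), and the interrupt-flag restore is a common extra step that the quoted hunks do not show.

    import java.io.IOException;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    /** Illustrative only: mirrors the log-then-rethrow pattern in the hunks above. */
    public class DiskCheckExample {
      private static final Logger LOG =
          LoggerFactory.getLogger(DiskCheckExample.class);

      /** Hypothetical stand-in for a blocking call such as checkAllVolumes(). */
      private void runBlockingCheck() throws InterruptedException {
        Thread.sleep(2000);
      }

      public void checkDisks() throws IOException {
        try {
          runBlockingCheck();
        } catch (InterruptedException e) {
          // Same shape as the DataNode.java hunk: log the interruption, then
          // convert it to an IOException so callers see a checked failure.
          LOG.error("Interrupted while running disk check", e);
          // Not in the quoted code, but commonly added: restore the interrupt
          // flag so callers further up the stack can still observe it.
          Thread.currentThread().interrupt();
          throw new IOException("Interrupted while running disk check", e);
        }
      }
    }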


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org