Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/03/29 10:58:33 UTC

[GitHub] [hadoop] steveloughran commented on a change in pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception

steveloughran commented on a change in pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#discussion_r595924047



##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
##########
@@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }
 
+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {

Review comment:
       SLF4J doesn't need the isDebugEnabled() wrappers unless the log call does complex work to build its arguments. But here it might be good to keep the debug log and have it also note the socket address, e.g.
   
   ```java
   LOG.debug("Connection received from {}", clientSocket.getInetAddress());
   ```
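   
   (For context: with SLF4J the parameterized call above only formats the message when debug logging is on, so the guard is only worth keeping when building the arguments is itself costly. A minimal sketch of the two forms; resolveHostName() is a hypothetical expensive helper, not anything in this patch:)
   
   ```java
   // Cheap arguments: the parameterized form alone is fine; the message
   // is only built if debug logging is enabled.
   LOG.debug("Connection received from {}", clientSocket.getInetAddress());

   // Costly arguments: worth guarding so the work is skipped entirely
   // when debug logging is off.
   if (LOG.isDebugEnabled()) {
     LOG.debug("Connection received from {}",
         resolveHostName(clientSocket)); // hypothetical costly helper
   }
   ```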

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
##########
@@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }
 
+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Got one client socket...");
+          }
+          int readData = clientSocket.getInputStream().read();

Review comment:
       so we accept() a connection, then read() one byte. If the stream is at EOF or there's an IOE, the socket is closed. But what if a byte is actually read? What happens to the clientSocket then?
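   
   One shape that would guarantee the socket is released on every path (a sketch only, assuming the cleaner's job is just to detect the far end closing): drain the stream until EOF inside try-with-resources, e.g.
   
   ```java
   try (Socket clientSocket = serverSocket.accept()) {
     LOG.debug("Connection received from {}", clientSocket.getInetAddress());
     // A ping client isn't expected to send payload bytes; keep reading
     // until -1 signals that the far end has closed the connection.
     InputStream in = clientSocket.getInputStream();
     while (in.read() != -1) {
       // discard anything received
     }
     // try-with-resources closes clientSocket on every exit path
   } catch (IOException e) {
     LOG.error("PingSocketCleaner exception", e);
   }
   ```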

##########
File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
##########
@@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }
 
+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Got one client socket...");
+          }
+          int readData = clientSocket.getInputStream().read();
+          if (readData == -1) {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("close socket cause client has closed.");
+            }
+            clientSocket.close();
+          }
+        } catch (IOException exception) {
+          LOG.error("PingSocketCleaner exception", exception);
+          if (clientSocket != null) {

Review comment:
       use org.apache.hadoop.io.IOUtils.cleanupWithLogger() here: it handles the null check and closes the socket, logging any failure from close() rather than throwing.
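   
   i.e. something like this in the catch block (a sketch; cleanupWithLogger takes the SLF4J logger plus any Closeables):
   
   ```java
   } catch (IOException exception) {
     LOG.error("PingSocketCleaner exception", exception);
     // null-safe close; any exception from close() is logged, not thrown
     IOUtils.cleanupWithLogger(LOG, clientSocket);
   }
   ```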



