Posted to common-issues@hadoop.apache.org by "Hari Shreedharan (JIRA)" <ji...@apache.org> on 2012/11/29 22:02:58 UTC

[jira] [Created] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Hari Shreedharan created HADOOP-9107:
----------------------------------------

             Summary: Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented
                 Key: HADOOP-9107
                 URL: https://issues.apache.org/jira/browse/HADOOP-9107
             Project: Hadoop Common
          Issue Type: Bug
          Components: ipc
    Affects Versions: 2.0.2-alpha
            Reporter: Hari Shreedharan


This code in Client.java looks fishy:

{code}
  public Writable call(RPC.RpcKind rpcKind, Writable rpcRequest,
      ConnectionId remoteId) throws InterruptedException, IOException {
    Call call = new Call(rpcKind, rpcRequest);
    Connection connection = getConnection(remoteId, call);
    connection.sendParam(call);                 // send the parameter
    boolean interrupted = false;
    synchronized (call) {
      while (!call.done) {
        try {
          call.wait();                           // wait for the result
        } catch (InterruptedException ie) {
          // save the fact that we were interrupted
          interrupted = true;
        }
      }

      if (interrupted) {
        // set the interrupt flag now that we are done waiting
        Thread.currentThread().interrupt();
      }

      if (call.error != null) {
        if (call.error instanceof RemoteException) {
          call.error.fillInStackTrace();
          throw call.error;
        } else { // local exception
          InetSocketAddress address = connection.getRemoteAddress();
          throw NetUtils.wrapException(address.getHostName(),
                  address.getPort(),
                  NetUtils.getHostname(),
                  0,
                  call.error);
        }
      } else {
        return call.getRpcResult();
      }
    }
  }
{code}

Blocking calls are expected to throw InterruptedException when the calling thread is interrupted. Instead, this method keeps waiting on the call object even after an interrupt, never throws InterruptedException, and only restores the thread's interrupt flag once the call is done, and none of this is documented. If the thread is interrupted, the method should throw InterruptedException regardless of whether the call ultimately succeeded.

This is a major issue for applications that never call this method directly but go through the HDFS client API to write to HDFS: their writing threads may be interrupted (for example because of an application-side timeout), yet the HDFS call does not throw InterruptedException. Any HDFS client call can end up setting the thread's interrupt flag, and this behaviour is not documented anywhere.
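
For illustration, a hedged caller-side sketch (not from this issue or any patch; client, rpcKind, rpcRequest and remoteId are placeholder names) of what a direct caller has to do today to notice the interruption:

{code}
// Hypothetical caller-side sketch. Because call() swallows
// InterruptedException and only re-sets the interrupt flag, the caller
// must poll the flag itself after the call returns.
Writable result = client.call(rpcKind, rpcRequest, remoteId);
if (Thread.currentThread().isInterrupted()) {
  // The wait was interrupted at some point, yet the call "succeeded";
  // the caller only finds out by inspecting the flag.
  throw new java.io.InterruptedIOException(
      "Thread interrupted while waiting for the RPC response");
}
{code}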


[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507321#comment-13507321 ] 

Steve Loughran commented on HADOOP-9107:
----------------------------------------

This is similar to HADOOP-6221, though you are proposing more cleanup. 

Could you use that patch and test as a starting point?
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Hari Shreedharan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506980#comment-13506980 ] 

Hari Shreedharan commented on HADOOP-9107:
------------------------------------------

Documenting the behaviour (item 1 in Karthik's list) is insufficient on its own, since clients often do not call this method directly. I believe that if this method gets interrupted, it should:
* clean up the Call object (some cleanup also appears to be required in the Connection object), and
* throw InterruptedException, regardless of whether the call completed successfully.
The resulting caller-facing contract is sketched below.
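
(Illustrative only; the Javadoc wording and the placeholder body are assumptions, not from any patch.)

{code}
/**
 * Make a call and wait for the response.
 *
 * @throws InterruptedException if the calling thread is interrupted while
 *         waiting for the response; the pending Call is cleaned up and no
 *         result is returned, even if the server eventually replies.
 * @throws IOException if the call fails locally or remotely.
 */
public Writable call(RPC.RpcKind rpcKind, Writable rpcRequest,
    ConnectionId remoteId) throws InterruptedException, IOException {
  // placeholder body; the interrupt handling is sketched later in the thread
  throw new UnsupportedOperationException("sketch only");
}
{code}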
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Karthik Kambatla (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507701#comment-13507701 ] 

Karthik Kambatla commented on HADOOP-9107:
------------------------------------------

From HADOOP-6221:
bq. I think a good tactic would be rather than trying to make the old RPC stack interruptible, focus on making Avro something that you can interrupt, so that going forward you can interrupt client programs trying to talk to unresponsive servers.

Steve, is there a reason for not making the old RPC stack interruptible?

I feel we should do both - what Hari is proposing here, and what HADOOP-6221 addresses.
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Hari Shreedharan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507660#comment-13507660 ] 

Hari Shreedharan commented on HADOOP-9107:
------------------------------------------

I agree that the two are pretty similar, but I think we still need the cleanup I am proposing here, right?
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Hari Shreedharan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507081#comment-13507081 ] 

Hari Shreedharan commented on HADOOP-9107:
------------------------------------------

My take on what should really happen in the catch block:
* call call.setException() to mark the call as failed,
* remove the call from the calls table,
* in the receiveResponse method, check whether calls.get(callId) returns null before proceeding, and
* throw the InterruptedException (or wrap it and rethrow), so client code knows something went wrong and the call failed.
A sketch of this is below.
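
The following is illustrative only: the removeCall helper on Connection and the exact setException argument are assumptions, and the real change may look different.

{code}
// Hypothetical rework of the catch block inside the wait loop (sketch only).
try {
  call.wait();                              // wait for the result
} catch (InterruptedException ie) {
  // Mark the call as failed so nothing later treats it as successful.
  call.setException(new java.io.InterruptedIOException("RPC call interrupted"));
  // Assumed helper: drop the call from the connection's calls table so a
  // late response from the server is simply ignored.
  connection.removeCall(call);
  // Propagate the interruption instead of swallowing it.
  throw ie;
}

// Matching guard in receiveResponse() (also a sketch):
//   Call call = calls.get(callId);
//   if (call == null) {
//     return;   // the call was cleaned up after an interrupt; drop the response
//   }
{code}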
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Steve Loughran (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13508588#comment-13508588 ] 

Steve Loughran commented on HADOOP-9107:
----------------------------------------

Given that the RPC stack is still around, +1 to making it interruptible; the current behaviour hurts external clients the most.

And +1 to both fixes; they should all go in together.
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Hari Shreedharan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13508592#comment-13508592 ] 

Hari Shreedharan commented on HADOOP-9107:
------------------------------------------

Karthik, Steve - makes complete sense.
                

[jira] [Updated] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Hari Shreedharan (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hari Shreedharan updated HADOOP-9107:
-------------------------------------

    Affects Version/s: 1.1.0
    

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Karthik Kambatla (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506853#comment-13506853 ] 

Karthik Kambatla commented on HADOOP-9107:
------------------------------------------

The things to fix look like (items 2 and 3 are sketched below):
# document that the method eats up {{InterruptedException}} and only restores the interrupt flag
# break after setting {{interrupted}} to true in the catch block
# throw an appropriate exception in the {{else}} branch of {{if (call.error != null)}}
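
A minimal sketch of items 2 and 3, assuming the interruption is surfaced as an {{InterruptedException}} (the exact exception type and message are a judgment call, not from a patch):

{code}
// Hypothetical minimal fix: stop waiting once interrupted and report it.
synchronized (call) {
  while (!call.done) {
    try {
      call.wait();                           // wait for the result
    } catch (InterruptedException ie) {
      interrupted = true;
      break;                                 // (2) stop waiting on interrupt
    }
  }

  if (interrupted) {
    // preserve the interrupt status for code further up the stack
    Thread.currentThread().interrupt();
  }

  if (call.error != null) {
    if (call.error instanceof RemoteException) {
      call.error.fillInStackTrace();
      throw call.error;
    } else { // local exception
      InetSocketAddress address = connection.getRemoteAddress();
      throw NetUtils.wrapException(address.getHostName(),
              address.getPort(),
              NetUtils.getHostname(),
              0,
              call.error);
    }
  } else {
    if (interrupted && !call.done) {
      // (3) the call never completed; surface the interruption instead of
      // returning a missing result
      throw new InterruptedException(
          "Interrupted while waiting for the RPC response");
    }
    return call.getRpcResult();
  }
}
{code}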
                

[jira] [Commented] (HADOOP-9107) Hadoop IPC client eats InterruptedException and sets interrupt on the thread which is not documented

Posted by "Hari Shreedharan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13506983#comment-13506983 ] 

Hari Shreedharan commented on HADOOP-9107:
------------------------------------------

To ensure that the actual client calling this knows that the call was interrupted, rather than being forced to check the thread's interrupt flag.
                