Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/03/02 03:15:50 UTC

[GitHub] [hadoop] functioner removed a comment on pull request #2727: HADOOP-17552. The read method of hadoop.ipc.Client$Connection$PingInputStream may swallow java.net.SocketTimeoutException due to the mistaken usage of the rpcTimeout configuration

functioner removed a comment on pull request #2727:
URL: https://github.com/apache/hadoop/pull/2727#issuecomment-788419641


   > > According to the comment in that test case, it "should not time out because effective rpc-timeout is multiple of ping interval: 1600 (= 800 * (1000 / 800 + 1))", which doesn't mean that it should never time out.
   > 
   > The SocketTimeoutException is thrown on `Socket#read` based on the timeout value set by `Socket#setSoTimeout`. It is set to 800 (pingInterval) in the test because pingInterval < rpcTimeout. The SocketTimeoutException is first thrown at 800 ms and swallowed because 800 < rpcTimeout; it is thrown again at 1600 ms and then handled because 1600 > rpcTimeout.
   
   I agree with your explanation. What I mean is that we should set an appropriate value for rpcTimeout in Server.java, because the current behavior is not satisfactory. Perhaps we should do something like `rpcTimeout = pingInterval * 2`. What do you think? @iwasakims
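   
   To make the timing concrete, here is a minimal, self-contained sketch of the swallow-and-retry behavior described above. It is not the actual `hadoop.ipc.Client$Connection$PingInputStream` source, only the arithmetic; the constants mirror the test values (pingInterval = 800 ms, rpcTimeout = 1000 ms):
   
   ```java
   // Minimal sketch of the timing described above -- not the actual
   // Client$Connection$PingInputStream code, just the arithmetic.
   // Assumed values mirror the test: soTimeout = pingInterval = 800 ms, rpcTimeout = 1000 ms.
   public class PingTimeoutSketch {
       static final int PING_INTERVAL = 800;  // value passed to Socket#setSoTimeout
       static final int RPC_TIMEOUT = 1000;   // configured rpc timeout
   
       static int effectiveTimeoutMillis() {
           int waiting = 0;
           while (true) {
               // Socket#read blocks for at most PING_INTERVAL ms and then throws
               // SocketTimeoutException; here we only advance the simulated clock.
               waiting += PING_INTERVAL;
               if (waiting > RPC_TIMEOUT) {
                   // 1600 > 1000: the exception is finally propagated to the caller.
                   // (For these values a >= comparison gives the same result.)
                   return waiting;
               }
               // 800 < 1000: the exception is swallowed and a ping is sent instead.
           }
       }
   
       public static void main(String[] args) {
           // Prints 1600, i.e. PING_INTERVAL * (RPC_TIMEOUT / PING_INTERVAL + 1).
           System.out.println(effectiveTimeoutMillis());
       }
   }
   ```
   
   Either way the comparison is written, the effective timeout is rounded up to a multiple of pingInterval rather than being the configured rpcTimeout itself, which is the behavior the suggestion above is reacting to.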

