Posted to issues@ozone.apache.org by "Arpit Agarwal (Jira)" <ji...@apache.org> on 2020/02/20 21:27:00 UTC

[jira] [Updated] (HDDS-3046) Fix Retry handling in Hadoop RPC Client

     [ https://issues.apache.org/jira/browse/HDDS-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDDS-3046:
--------------------------------
    Summary: Fix Retry handling in Hadoop RPC Client  (was: Fix Retry handling in Rpc Client)

> Fix Retry handling in Hadoop RPC Client
> ---------------------------------------
>
>                 Key: HDDS-3046
>                 URL: https://issues.apache.org/jira/browse/HDDS-3046
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: OMHA
>
> Right now, for all exceptions other than ServiceException, we use the FailoverOnNetworkException retry policy.
> This policy is created with 15 max failovers and 15 retries.
>  
> {code:java}
> retryPolicyOnNetworkException.shouldRetry(
>     exception, retries, failovers, isIdempotentOrAtMostOnce);
> {code}
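>  
> For context, a policy like this is typically built with Hadoop's RetryPolicies helper. A minimal sketch of the construction (the fallback policy and delay values below are illustrative assumptions, not the Ozone client's actual values):
> {code:java}
> import org.apache.hadoop.io.retry.RetryPolicies;
> import org.apache.hadoop.io.retry.RetryPolicy;
>
> // Sketch only: illustrative parameters, not the real client configuration.
> RetryPolicy retryPolicyOnNetworkException =
>     RetryPolicies.failoverOnNetworkException(
>         RetryPolicies.TRY_ONCE_THEN_FAIL, // assumed fallback policy
>         15,    // max failovers
>         15,    // max retries
>         100,   // delay between retries (ms)
>         1000); // max delay base for failover backoff (ms)
> {code}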
> *There are 2 issues with this:*
>  # When shouldRetry returns the action FAILOVER_AND_RETRY, the client stays stuck on the same OM and never fails over to the next one, because OMFailoverProxyProvider#performFailover() is a dummy call that performs no actual failover (see the first sketch after this list).
>  # When ozone.client.failover.max.attempts is set to 15, having 2 policies each set to 15 means we retry 15*2 = 30 times in the worst case (see the second sketch after this list).
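>  
> A minimal sketch of issue 1, using a hypothetical simplified proxy provider (the real OMFailoverProxyProvider is more involved): when performFailover() is a no-op, every FAILOVER_AND_RETRY action lands on the same OM again.
> {code:java}
> import java.util.List;
>
> // Hypothetical, simplified provider for illustration only.
> class SketchOMFailoverProxyProvider {
>   private final List<String> omNodeIds; // e.g. [om1, om2, om3]
>   private int currentIndex = 0;
>
>   SketchOMFailoverProxyProvider(List<String> omNodeIds) {
>     this.omNodeIds = omNodeIds;
>   }
>
>   // Behavior described in this issue: a dummy call, so
>   // FAILOVER_AND_RETRY keeps talking to the same OM.
>   void performFailoverAsDummy() {
>     // no-op
>   }
>
>   // Expected behavior: rotate to the next OM in the list.
>   void performFailover() {
>     currentIndex = (currentIndex + 1) % omNodeIds.size();
>   }
>
>   String currentProxyOMNodeId() {
>     return omNodeIds.get(currentIndex);
>   }
> }
> {code}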
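>  
> And the worst-case attempt count in issue 2 (illustrative arithmetic using the numbers above, not actual client code):
> {code:java}
> // Sketch only: values taken from this issue's description.
> int maxAttempts = 15; // ozone.client.failover.max.attempts
> int numPolicies = 2;  // one policy per exception category, each honoring maxAttempts
> int worstCase = maxAttempts * numPolicies; // 30 attempts, double what was configured
> {code}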



