Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2021/03/29 20:40:14 UTC

[GitHub] [hbase] saintstack commented on a change in pull request #3030: HBASE-25634 The client scan frequently exceeds the quota, which cause…

saintstack commented on a change in pull request #3030:
URL: https://github.com/apache/hbase/pull/3030#discussion_r603598168



##########
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java
##########
@@ -102,7 +103,13 @@ public T callWithRetries(RetryingCallable<T> callable, int callTimeout)
       long expectedSleep;
       try {
         // bad cache entries are cleared in the call to RetryingCallable#throwable() in catch block
-        callable.prepare(tries != 0);
+        Throwable t = null;
+        if (exceptions != null && !exceptions.isEmpty()) {
+          t = exceptions.get(exceptions.size() - 1).throwable;
+        }
+        if (!(t instanceof RpcThrottlingException)) {
+          callable.prepare(tries != 0);
+        }

Review comment:
       So, the idea here is that if we got an exception and we are retrying, do NOT reload the cache if the exception was because we were throttled?
   
   This is a good idea. I wonder if there are other exceptions where we retry but do not need to reload the cache?
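    The decision the patch makes can be sketched in plain Java. This is a simplified illustration, not the actual HBase code: `RpcThrottlingException` and `ThrowableWithExtraContext` below are local stand-ins for the real `org.apache.hadoop.hbase.quotas.RpcThrottlingException` and `RetriesExhaustedException.ThrowableWithExtraContext`, and `shouldPrepare` is a hypothetical helper that mirrors the inline check in the diff:

```java
import java.util.ArrayList;
import java.util.List;

public class ThrottleRetrySketch {
  // Stand-in for org.apache.hadoop.hbase.quotas.RpcThrottlingException.
  static class RpcThrottlingException extends Exception {}

  // Stand-in for RetriesExhaustedException.ThrowableWithExtraContext,
  // which records the throwable from each failed attempt.
  static class ThrowableWithExtraContext {
    final Throwable throwable;
    ThrowableWithExtraContext(Throwable t) { this.throwable = t; }
  }

  // Mirrors the patch: call prepare() (and potentially reload the region
  // location cache) only when the most recent failure was NOT a throttle.
  static boolean shouldPrepare(List<ThrowableWithExtraContext> exceptions) {
    Throwable t = null;
    if (exceptions != null && !exceptions.isEmpty()) {
      t = exceptions.get(exceptions.size() - 1).throwable;
    }
    return !(t instanceof RpcThrottlingException);
  }

  public static void main(String[] args) {
    List<ThrowableWithExtraContext> exceptions = new ArrayList<>();
    // First attempt: no prior exception, so prepare as usual.
    System.out.println(shouldPrepare(exceptions));
    // Throttled: the region location is still valid, skip the reload.
    exceptions.add(new ThrowableWithExtraContext(new RpcThrottlingException()));
    System.out.println(shouldPrepare(exceptions));
    // Any other failure: re-prepare, since the cached location may be stale.
    exceptions.add(new ThrowableWithExtraContext(new Exception("other failure")));
    System.out.println(shouldPrepare(exceptions));
  }
}
```

    The point of the check is that a throttle rejection says nothing about region placement, so clearing or reloading the location cache on that path is wasted work.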




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org