Posted to dev@hbase.apache.org by "Ankit Singhal (JIRA)" <ji...@apache.org> on 2016/11/23 12:06:59 UTC
[jira] [Created] (HBASE-17170) HBase is also retrying DoNotRetryIOException because of class loader differences.
Ankit Singhal created HBASE-17170:
-------------------------------------
Summary: HBase is also retrying DoNotRetryIOException because of class loader differences.
Key: HBASE-17170
URL: https://issues.apache.org/jira/browse/HBASE-17170
Project: HBase
Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
The class loader used by the API exposed by Hadoop and the thread context class loader used by RunJar (bin/hadoop jar phoenix-client.jar ....) are different, so classes loaded from the jar are not visible to the other class loader that the API uses.
The actual problem is stated in this comment: https://issues.apache.org/jira/browse/PHOENIX-3495?focusedCommentId=15677081&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677081
If the HBase classes are not loadable from the Hadoop classpath (the one from which the Hadoop jars are loaded), then the RemoteException never gets unwrapped because Class.forName throws ClassNotFoundException, and the client keeps retrying even when the underlying cause is a DoNotRetryIOException.
public class RpcRetryingCaller<T> {
  public IOException unwrapRemoteException() {
    try {
      Class<?> realClass = Class.forName(getClassName());
      return instantiateException(realClass.asSubclass(IOException.class));
    } catch (Exception e) {
      // cannot instantiate the original exception, just return this
    }
    return this;
  }
}
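The failure mode above can be reproduced without HBase at all. The following self-contained simulation (class and method names are illustrative, not the real Hadoop/HBase ones) shows that when Class.forName cannot see the real exception class, the wrapper is returned as-is, so retry logic that checks for DoNotRetryIOException never matches:

```java
import java.io.IOException;

// Minimal simulation of the unwrap fallback (names are hypothetical).
public class UnwrapDemo {
    static class RemoteExceptionSim extends IOException {
        private final String className;

        RemoteExceptionSim(String className, String msg) {
            super(msg);
            this.className = className;
        }

        IOException unwrapRemoteException() {
            try {
                // Resolves against this class's defining loader; if the real
                // exception class lives only in another loader, this throws
                // ClassNotFoundException and we fall through below.
                Class<?> realClass = Class.forName(className);
                return realClass.asSubclass(IOException.class)
                        .getConstructor(String.class)
                        .newInstance(getMessage());
            } catch (Exception e) {
                // Stays wrapped: callers cannot recognize the real type.
                return this;
            }
        }
    }

    public static void main(String[] args) {
        // Visible class: unwrap recovers the real exception type.
        IOException ok = new RemoteExceptionSim("java.io.EOFException", "eof")
                .unwrapRemoteException();
        System.out.println(ok.getClass().getName());

        // Invisible class (wrong loader / not on classpath): unwrap
        // silently fails and returns the wrapper itself.
        IOException bad = new RemoteExceptionSim("org.example.DoNotRetryIOException", "fatal")
                .unwrapRemoteException();
        System.out.println(bad.getClass().getName());
    }
}
```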
*Possible solution:*
We could create our own HBaseRemoteWithExtrasException (an extension of RemoteWithExtrasException) so that the default class loader is the one that loaded the HBase classes, and extend unwrapRemoteException() to throw an exception if unwrapping fails because of a ClassNotFoundException?
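A rough sketch of that proposal (names and signatures assumed from the description above; for a self-contained example the class extends IOException directly, whereas the real one would extend org.apache.hadoop.hbase.ipc.RemoteWithExtrasException):

```java
import java.io.IOException;

// Hypothetical HBaseRemoteWithExtrasException as proposed: resolve the real
// exception class against this class's own loader, and fail loudly on
// ClassNotFoundException instead of silently staying wrapped.
public class HBaseRemoteWithExtrasException extends IOException {
    private final String className;

    public HBaseRemoteWithExtrasException(String className, String msg) {
        super(msg);
        this.className = className;
    }

    public IOException unwrapRemoteException() throws IOException {
        try {
            // Because this class ships with HBase, its defining class loader
            // is the one that can see DoNotRetryIOException and friends.
            Class<?> realClass =
                    Class.forName(className, true, getClass().getClassLoader());
            return realClass.asSubclass(IOException.class)
                    .getConstructor(String.class)
                    .newInstance(getMessage());
        } catch (ClassNotFoundException e) {
            // Surface the class-loading problem so callers stop retrying
            // instead of looping on an unrecognized wrapper.
            throw new IOException("could not unwrap remote exception " + className, e);
        } catch (Exception e) {
            // Any other instantiation failure: keep the old fallback.
            return this;
        }
    }
}
```

With this behavior, a client that cannot resolve the remote exception class gets an immediate IOException (with the ClassNotFoundException as its cause) rather than retrying a DoNotRetryIOException indefinitely.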
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)