Posted to user@hbase.apache.org by Henry Hung <YT...@winbond.com> on 2013/11/21 02:43:35 UTC
hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Hi All,
When stopping the master or a regionserver, I found some ERROR and WARN entries in the log files. Can these errors cause problems in HBase?
13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
java.lang.IllegalArgumentException: object is not an instance of declaring class
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at $Proxy18.close(Unknown Source)
at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
Best regards,
Henry
________________________________
The privileged confidential information contained in this email is intended for use only by the addressees as indicated by the original sender of this email. If you are not the addressee indicated in this email or are not responsible for delivery of the email to such a person, please kindly reply to the sender indicating this fact and delete all copies of it from your computer and network server immediately. Your cooperation is highly appreciated. It is advised that any unauthorized use of confidential information of Winbond is strictly prohibited; and any information in this email irrelevant to the official business of Winbond shall be deemed as neither given nor endorsed by Winbond.
Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Posted by Ted Yu <yu...@gmail.com>.
Henry:
See HBASE-7635 Proxy created by HFileSystem#createReorderingProxy() should
implement Closeable
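The idea behind that fix can be sketched in isolation: when a dynamic proxy advertises Closeable on behalf of a target that may not implement it, the invocation handler has to special-case close() instead of blindly delegating. A minimal, self-contained sketch of that idea, where Protocol and the anonymous target are hypothetical stand-ins, not HBase or HDFS classes:

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Protocol plays the role of ClientProtocol (an interface with no close()).
interface Protocol { String ping(); }

public class CloseableProxyDemo {
    static Closeable makeProxy(final Object target) {
        return (Closeable) Proxy.newProxyInstance(
            target.getClass().getClassLoader(),
            new Class<?>[]{Protocol.class, Closeable.class},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args)
                        throws Throwable {
                    // Special-case close(): only forward it when the target
                    // really is Closeable; otherwise treat it as a no-op.
                    if ("close".equals(m.getName())
                            && (args == null || args.length == 0)) {
                        if (target instanceof Closeable) {
                            ((Closeable) target).close();
                        }
                        return null;
                    }
                    return m.invoke(target, args); // plain delegation
                }
            });
    }

    public static void main(String[] args) throws IOException {
        Protocol target = new Protocol() {          // deliberately NOT Closeable
            public String ping() { return "pong"; }
        };
        Closeable proxy = makeProxy(target);
        System.out.println(((Protocol) proxy).ping());
        proxy.close();  // without the special case this would throw
        System.out.println("closed cleanly");
    }
}
```

Without the `"close"` branch, the delegation line reproduces the IllegalArgumentException from the stack trace above, because Method.invoke is handed an object that is not an instance of the method's declaring interface.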
On Tue, Nov 26, 2013 at 1:56 AM, Ted Yu <yu...@gmail.com> wrote:
> Here is the caller to createReorderingProxy():
>
> ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
>
> where namenode
> is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB :
>
> public class ClientNamenodeProtocolTranslatorPB implements
>
> ProtocolMetaInterface, ClientProtocol, Closeable, ProtocolTranslator {
>
> In createReorderingProxy() :
>
> new Class[]{ClientProtocol.class, Closeable.class},
>
> We ask for Closeable interface.
>
>
> Did the error persist after you replaced it with hadoop-hdfs-2.2.0.jar?
> Meaning, did you start HBase using the new hadoop jars?
>
> Cheers
>
>
> On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com> wrote:
>
>> I looked into the source code of
>> org/apache/hadoop/hbase/fs/HFileSystem.java
>> and whenever I execute hbase-daemon.sh stop master (or regionserver), the
>> method.getName() is "close",
>> but org/apache/hadoop/hdfs/protocol/ClientProtocol.java does not have a
>> method named "close",
>> so it results in the error "object is not an instance of declaring class".
>>
>> Could someone familiar with hbase-0.96.0 hadoop2 tell me whether this
>> problem needs to be fixed? And how to fix it?
>>
>> private static ClientProtocol createReorderingProxy(
>>     final ClientProtocol cp, final ReorderBlocks lrb, final Configuration conf) {
>>   return (ClientProtocol) Proxy.newProxyInstance(cp.getClass().getClassLoader(),
>>       new Class[]{ClientProtocol.class, Closeable.class},
>>       new InvocationHandler() {
>>         public Object invoke(Object proxy, Method method, Object[] args)
>>             throws Throwable {
>>           try {
>>             // method.invoke will fail if method.getName().equals("close")
>>             // because ClientProtocol does not have a method "close"
>>             Object res = method.invoke(cp, args);
>>             if (res != null && args != null && args.length == 3
>>                 && "getBlockLocations".equals(method.getName())
>>                 && res instanceof LocatedBlocks
>>                 && args[0] instanceof String
>>                 && args[0] != null) {
>>               lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
>>             }
>>             return res;
>>           } catch (InvocationTargetException ite) {
>>             // We will have this for all the exceptions, checked or not, sent
>>             // by any layer, including the functional exception
>>             Throwable cause = ite.getCause();
>>             if (cause == null) {
>>               throw new RuntimeException(
>>                   "Proxy invocation failed and getCause is null", ite);
>>             }
>>             if (cause instanceof UndeclaredThrowableException) {
>>               Throwable causeCause = cause.getCause();
>>               if (causeCause == null) {
>>                 throw new RuntimeException(
>>                     "UndeclaredThrowableException had null cause!");
>>               }
>>               cause = cause.getCause();
>>             }
>>             throw cause;
>>           }
>>         }
>>       });
>> }
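The failure mode described above is easy to reproduce outside HBase: reflectively invoking Closeable.close() on an object that does not implement Closeable throws exactly this IllegalArgumentException. A minimal repro (nothing here is HBase code):

```java
import java.io.Closeable;
import java.lang.reflect.Method;

public class NotAnInstanceDemo {
    public static void main(String[] args) throws Exception {
        // close() as declared by java.io.Closeable
        Method close = Closeable.class.getMethod("close");
        Object notCloseable = new Object(); // does not implement Closeable
        try {
            // Same shape of mistake as delegating close() to a ClientProtocol
            // target that never declared it.
            close.invoke(notCloseable);
        } catch (IllegalArgumentException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

On HotSpot the exception message is the familiar "object is not an instance of declaring class" seen in the ERROR log line.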
>>
>>
>>
>> -----Original Message-----
>> From: MA11 YTHung1
>> Sent: Thursday, November 21, 2013 9:57 AM
>> To: user@hbase.apache.org
>> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
>> called on non proxy.
>>
>> Additional information:
>>
>> I replace all files with prefix hadoop in hbase-0.96.0-hadoop2/lib with
>> hadoop-2.2.0 libraries.
>>
>> the ls -l of hbase-0.96.0-hadoop2/lib as below:
>>
>> -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
>> -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
>> -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
>> -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
>> -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12 commons-beanutils-1.7.0.jar
>> -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13 commons-beanutils-core-1.8.0.jar
>> -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
>> -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
>> -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12 commons-collections-3.2.1.jar
>> -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
>> -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13 commons-configuration-1.6.jar
>> -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
>> -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
>> -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
>> -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12 commons-httpclient-3.1.jar
>> -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
>> -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
>> -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
>> -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
>> -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
>> -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
>> -rw-r--r-- 1 hadoop users 15322 Sep 17 16:12 findbugs-annotations-1.3.9-1.jar
>> -rw-r--r-- 1 hadoop users 21817 Sep 17 23:29 gmbal-api-only-3.0.0-b023.jar
>> -rw-r--r-- 1 hadoop users 684337 Sep 17 23:29 grizzly-framework-2.1.1.jar
>> -rw-r--r-- 1 hadoop users 210846 Sep 17 23:29 grizzly-framework-2.1.1-tests.jar
>> -rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
>> -rw-r--r-- 1 hadoop users 193583 Sep 17 23:29 grizzly-http-server-2.1.1.jar
>> -rw-r--r-- 1 hadoop users 336878 Sep 17 23:29 grizzly-http-servlet-2.1.1.jar
>> -rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
>> -rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
>> -rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
>> -rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
>> -rw-r--r-- 1 hadoop users 16778 Nov 20 17:39 hadoop-annotations-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 2576 Oct 12 06:20 hadoop-client-2.1.0-beta.jar
>> -rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50 hadoop-common-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48 hadoop-hdfs-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48 hadoop-hdfs-2.2.0-tests.jar
>> -rw-r--r-- 1 hadoop users 482042 Nov 21 08:49 hadoop-mapreduce-client-app-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 656365 Nov 21 08:49 hadoop-mapreduce-client-common-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50 hadoop-mapreduce-client-core-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 35216 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
>> -rw-r--r-- 1 hadoop users 21537 Nov 21 08:51 hadoop-mapreduce-client-shuffle-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 94728 Nov 21 08:51 hadoop-yarn-client-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51 hadoop-yarn-common-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 175554 Nov 21 08:52 hadoop-yarn-server-common-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 467638 Nov 21 08:52 hadoop-yarn-server-nodemanager-2.2.0.jar
>> -rw-r--r-- 1 hadoop users 825853 Oct 12 06:28 hbase-client-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 354845 Oct 12 06:28 hbase-common-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 132690 Oct 12 06:28 hbase-common-0.96.0-hadoop2-tests.jar
>> -rw-r--r-- 1 hadoop users 97428 Oct 12 06:28 hbase-examples-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 72765 Oct 12 06:28 hbase-hadoop2-compat-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 32096 Oct 12 06:28 hbase-hadoop-compat-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 12174 Oct 12 06:28 hbase-it-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 288784 Oct 12 06:28 hbase-it-0.96.0-hadoop2-tests.jar
>> -rw-r--r-- 1 hadoop users 94784 Oct 12 06:28 hbase-prefix-tree-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28 hbase-protocol-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28 hbase-server-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28 hbase-server-0.96.0-hadoop2-tests.jar
>> -rw-r--r-- 1 hadoop users 12554 Oct 12 06:28 hbase-shell-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 10941 Oct 12 06:28 hbase-testing-util-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28 hbase-thrift-0.96.0-hadoop2.jar
>> -rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
>> -rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
>> -rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
>> -rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
>> -rw-r--r-- 1 hadoop users 227517 Sep 17 16:13 jackson-core-asl-1.8.8.jar
>> -rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
>> -rw-r--r-- 1 hadoop users 669065 Sep 17 16:13 jackson-mapper-asl-1.8.8.jar
>> -rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
>> -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
>> -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13 jasper-compiler-5.5.23.jar
>> -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
>> -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
>> -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
>> -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
>> -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
>> -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
>> -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
>> -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
>> -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
>> -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
>> -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
>> -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29 jersey-test-framework-core-1.8.jar
>> -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29 jersey-test-framework-grizzly2-1.8.jar
>> -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
>> -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
>> -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
>> -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15 jetty-sslengine-6.1.26.jar
>> -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
>> -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
>> -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
>> -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
>> -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
>> -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
>> -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
>> -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
>> -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
>> -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
>> -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29 management-api-3.0.0-b012.jar
>> -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
>> drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
>> -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
>> -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
>> -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
>> drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
>> -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13 servlet-api-2.5-6.1.14.jar
>> -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
>> -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
>> -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
>> -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
>> -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
>> -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
>> -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
>> -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
>>
>> Best regards,
>> Henry
>>
>> -----Original Message-----
>> From: MA11 YTHung1
>> Sent: Thursday, November 21, 2013 9:51 AM
>> To: user@hbase.apache.org
>> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
>> called on non proxy.
>>
>> I'm using hadoop-2.2.0 stable
>>
>> -----Original Message-----
>> From: Jimmy Xiang [mailto:jxiang@cloudera.com]
>> Sent: Thursday, November 21, 2013 9:49 AM
>> To: user
>> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
>> called on non proxy.
>>
>> Which version of Hadoop do you use?
RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Posted by Henry Hung <YT...@winbond.com>.
@Ted.
After looking back at the previous email thread, I realize I forgot to mention that I'm using an HA namenode with QJM...
I feel bad for assuming that you already knew about my environment, sorry.
Best regards,
Henry
-----Original Message-----
From: MA11 YTHung1
Sent: Tuesday, November 26, 2013 9:13 AM
To: user@hbase.apache.org
Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
@Ted
Yes, I use the hadoop-hdfs-2.2.0.jar.
BTW, how can you be certain that the namenode class is ClientNamenodeProtocolTranslatorPB?
From NameNodeProxies, I can only assume ClientNamenodeProtocolTranslatorPB is used when connecting to a single hadoop namenode.
public static <T> ProxyAndInfo<T> createNonHAProxy(
    Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
    UserGroupInformation ugi, boolean withRetries) throws IOException {
  Text dtService = SecurityUtil.buildTokenService(nnAddr);
  T proxy;
  if (xface == ClientProtocol.class) {
    proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
        withRetries);
But I'm using an HA configuration with QJM, so my guess is that createProxy will go to the HA case, because I provide failoverProxyProviderClass with "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
    URI nameNodeUri, Class<T> xface) throws IOException {
  Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
      getFailoverProxyProviderClass(conf, nameNodeUri, xface);
  if (failoverProxyProviderClass == null) {
    // Non-HA case
    return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri), xface,
        UserGroupInformation.getCurrentUser(), true);
  } else {
    // HA case
    FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
        .createFailoverProxyProvider(conf, failoverProxyProviderClass, xface,
            nameNodeUri);
    Conf config = new Conf(conf);
    T proxy = (T) RetryProxy.create(xface, failoverProxyProvider, RetryPolicies
        .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
            config.maxFailoverAttempts, config.failoverSleepBaseMillis,
            config.failoverSleepMaxMillis));
    Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
    return new ProxyAndInfo<T>(proxy, dtService);
  }
}
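Whichever branch createProxy() takes, RPC.stopProxy() will only accept the resulting object if it is Closeable itself, or is a java.lang.reflect.Proxy whose invocation handler is closeable, which is exactly what the WARN message above complains about. A rough illustration of that precondition (describe() and Dummy are made up for this sketch, and Hadoop's real check also accepts its own RpcInvocationHandler):

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyInspector {
    interface Dummy { }

    // Mirrors the generic part of the stopProxy precondition: the object
    // must be Closeable, or be a dynamic proxy with a Closeable handler.
    static String describe(Object o) {
        if (o instanceof Closeable) return "closeable proxy";
        if (Proxy.isProxyClass(o.getClass())) {
            InvocationHandler h = Proxy.getInvocationHandler(o);
            return (h instanceof Closeable) ? "closeable handler"
                                            : "proxy, but nothing closeable";
        }
        return "not a proxy at all";
    }

    public static void main(String[] args) {
        InvocationHandler noop = (p, m, a) -> null; // non-closeable handler
        Object plain = Proxy.newProxyInstance(Dummy.class.getClassLoader(),
                new Class<?>[]{Dummy.class}, noop);
        Object closeable = Proxy.newProxyInstance(Dummy.class.getClassLoader(),
                new Class<?>[]{Dummy.class, Closeable.class}, noop);
        System.out.println(describe(plain));     // rejected by stopProxy
        System.out.println(describe(closeable)); // accepted by stopProxy
    }
}
```

Printing describe() for the proxy HBase ends up holding would show which of these cases the HA code path actually produces.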
Here is the snippet of my hdfs-site.xml:
<property>
  <name>dfs.nameservices</name>
  <value>hadoopdev</value>
</property>
<property>
  <name>dfs.ha.namenodes.hadoopdev</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
  <value>fphd9.ctpilot1.com:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.hadoopdev.nn1</name>
  <value>fphd9.ctpilot1.com:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
  <value>fphd10.ctpilot1.com:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.hadoopdev.nn2</name>
  <value>fphd10.ctpilot1.com:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hadoopdev</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoop/hadoop-data-2/journal</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>fphd1.ctpilot1.com:2222</value>
</property>
> -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
> -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13 jasper-compiler-5.5.23.jar
> -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
> -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
> -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
> -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
> -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
> -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
> -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
> -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
> -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
> -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
> -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
> -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29
> jersey-test-framework-core-1.8.jar
> -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29
> jersey-test-framework-grizzly2-1.8.jar
> -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
> -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
> -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
> -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15 jetty-sslengine-6.1.26.jar
> -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
> -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
> -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
> -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
> -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
> -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
> -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
> -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
> -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
> -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
> -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29
> management-api-3.0.0-b012.jar
> -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
> drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
> -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
> -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
> -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
> drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
> -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13 servlet-api-2.5-6.1.14.jar
> -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
> -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
> -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
> -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
> -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
> -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
> -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
> -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
>
> Best regards,
> Henry
>
> -----Original Message-----
> From: MA11 YTHung1
> Sent: Thursday, November 21, 2013 9:51 AM
> To: user@hbase.apache.org
> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> RPC.stopProxy called on non proxy.
>
> I'm using hadoop-2.2.0 stable
>
> -----Original Message-----
> From: Jimmy Xiang [mailto:jxiang@cloudera.com]
> Sent: Thursday, November 21, 2013 9:49 AM
> To: user
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> RPC.stopProxy called on non proxy.
>
> Which version of Hadoop do you use?
>
>
> On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
>
> > Hi All,
> >
> > When stopping the master or a regionserver, I found some ERROR and WARN
> > entries in the log files. Can these errors cause problems in HBase?
> >
> > 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> > 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> > java.lang.IllegalArgumentException: object is not an instance of declaring class
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >         at java.lang.reflect.Method.invoke(Method.java:597)
> >         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> >         at $Proxy18.close(Unknown Source)
> >         at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> >         at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> >         at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> >         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> >         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> >         at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> >         at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> >         at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> >         at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> >         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> >         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> >         at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> >
> > Best regards,
> > Henry
> >
> > ________________________________
> > The privileged confidential information contained in this email is
> > intended for use only by the addressees as indicated by the original
> > sender of this email. If you are not the addressee indicated in this
> > email or are not responsible for delivery of the email to such a
> > person, please kindly reply to the sender indicating this fact and
> > delete all copies of it from your computer and network server
> > immediately. Your cooperation is highly appreciated. It is advised
> > that any unauthorized use of confidential information of Winbond is
> > strictly prohibited; and any information in this email irrelevant to
> > the official business of Winbond shall be deemed as neither given
> > nor
> endorsed by Winbond.
> >
>
Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
called on non proxy.
Posted by Ted Yu <yu...@gmail.com>.
The following JIRA has been integrated into branch 2.2:
HADOOP-10132 RPC#stopProxy() should log the class of proxy when
IllegalArgumentException is encountered
FYI
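For anyone following along, the stack trace in this thread boils down to reflectively invoking Closeable.close() on a dynamic proxy that does not implement Closeable, which Method.invoke rejects with "object is not an instance of declaring class" before the invocation handler is ever consulted. Below is a minimal, self-contained sketch of that failure mode; ClientLike is a hypothetical stand-in interface, not a Hadoop class:

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyCloseDemo {
    // Stand-in for ClientProtocol: no close() declared.
    interface ClientLike {
        String ping();
    }

    // Returns true when reflectively invoking Closeable.close() on a proxy
    // that does NOT implement Closeable fails -- analogous to the HA
    // retry-proxy case discussed in this thread.
    static boolean invokeCloseFails() throws Exception {
        InvocationHandler handler = (proxy, method, args) -> "pong";
        ClientLike client = (ClientLike) Proxy.newProxyInstance(
                ClientLike.class.getClassLoader(),
                new Class<?>[]{ClientLike.class},  // Closeable deliberately absent
                handler);
        Method close = Closeable.class.getMethod("close");
        try {
            close.invoke(client);  // client is not an instance of Closeable
            return false;
        } catch (IllegalArgumentException expected) {
            return true;           // receiver type check fails before dispatch
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("close() on non-Closeable proxy fails: "
                + invokeCloseFails());
    }
}
```

Note that the close() call never reaches the handler: Method.invoke checks the receiver's type against the method's declaring interface first, which is exactly why adding Closeable.class to the proxy's interface list is not enough when the wrapped target lacks it.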
On Mon, Nov 25, 2013 at 9:56 PM, Ted Yu <yu...@gmail.com> wrote:
> Update:
> Henry tried my patch attached to HBASE-10029
>
> From the master log, it seems my patch worked.
>
> I will get back to this thread after further testing / code review.
>
> Cheers
>
> On Nov 25, 2013, at 6:05 PM, Henry Hung <YT...@winbond.com> wrote:
>
> > @Ted:
> >
> > I created the JIRA; is the information sufficient?
> > https://issues.apache.org/jira/browse/HBASE-10029
> >
> > Best regards,
> > Henry
> >
> > -----Original Message-----
> > From: Ted Yu [mailto:yuzhihong@gmail.com]
> > Sent: Tuesday, November 26, 2013 9:30 AM
> > To: user@hbase.apache.org
> > Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
> called on non proxy.
> >
> > Henry:
> > Thanks for the additional information.
> >
> > Looks like an HA namenode with QJM is not covered by the current code.
> >
> > Mind filing a JIRA with a summary of this thread?
> >
> > Cheers
> >
> >
> > On Tue, Nov 26, 2013 at 9:12 AM, Henry Hung <YT...@winbond.com> wrote:
> >
> >> @Ted
> >> Yes, I use the hadoop-hdfs-2.2.0.jar.
> >>
> >> BTW, how can you be certain that the namenode class is
> >> ClientNamenodeProtocolTranslatorPB?
> >>
> >> From NameNodeProxies, I can only assume that
> >> ClientNamenodeProtocolTranslatorPB is used only when connecting to a
> >> single (non-HA) Hadoop namenode.
> >>
> >> public static <T> ProxyAndInfo<T> createNonHAProxy(
> >>     Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
> >>     UserGroupInformation ugi, boolean withRetries) throws IOException {
> >>   Text dtService = SecurityUtil.buildTokenService(nnAddr);
> >>
> >>   T proxy;
> >>   if (xface == ClientProtocol.class) {
> >>     proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
> >>         withRetries);
> >>
> >>
> >> But I'm using an HA configuration with QJM, so my guess is that
> >> createProxy will go to the HA case, because I provide a
> >> failoverProxyProviderClass of
> >> "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
> >>
> >> public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
> >>     URI nameNodeUri, Class<T> xface) throws IOException {
> >>   Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
> >>       getFailoverProxyProviderClass(conf, nameNodeUri, xface);
> >>
> >>   if (failoverProxyProviderClass == null) {
> >>     // Non-HA case
> >>     return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri), xface,
> >>         UserGroupInformation.getCurrentUser(), true);
> >>   } else {
> >>     // HA case
> >>     FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
> >>         .createFailoverProxyProvider(conf, failoverProxyProviderClass,
> >>             xface, nameNodeUri);
> >>     Conf config = new Conf(conf);
> >>     T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
> >>         RetryPolicies.failoverOnNetworkException(
> >>             RetryPolicies.TRY_ONCE_THEN_FAIL,
> >>             config.maxFailoverAttempts, config.failoverSleepBaseMillis,
> >>             config.failoverSleepMaxMillis));
> >>
> >>     Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
> >>     return new ProxyAndInfo<T>(proxy, dtService);
> >>   }
> >> }
> >>
> >> Here is the snippet of my hdfs-site.xml:
> >>
> >> <property>
> >> <name>dfs.nameservices</name>
> >> <value>hadoopdev</value>
> >> </property>
> >> <property>
> >> <name>dfs.ha.namenodes.hadoopdev</name>
> >> <value>nn1,nn2</value>
> >> </property>
> >> <property>
> >> <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
> >> <value>fphd9.ctpilot1.com:9000</value>
> >> </property>
> >> <property>
> >> <name>dfs.namenode.http-address.hadoopdev.nn1</name>
> >> <value>fphd9.ctpilot1.com:50070</value>
> >> </property>
> >> <property>
> >> <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
> >> <value>fphd10.ctpilot1.com:9000</value>
> >> </property>
> >> <property>
> >> <name>dfs.namenode.http-address.hadoopdev.nn2</name>
> >> <value>fphd10.ctpilot1.com:50070</value>
> >> </property>
> >> <property>
> >> <name>dfs.namenode.shared.edits.dir</name>
> >> <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
> >> </property>
> >> <property>
> >> <name>dfs.client.failover.proxy.provider.hadoopdev</name>
> >> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> >> </property>
> >> <property>
> >> <name>dfs.ha.fencing.methods</name>
> >> <value>shell(/bin/true)</value>
> >> </property>
> >> <property>
> >> <name>dfs.journalnode.edits.dir</name>
> >> <value>/data/hadoop/hadoop-data-2/journal</value>
> >> </property>
> >> <property>
> >> <name>dfs.ha.automatic-failover.enabled</name>
> >> <value>true</value>
> >> </property>
> >> <property>
> >> <name>ha.zookeeper.quorum</name>
> >> <value>fphd1.ctpilot1.com:2222</value>
> >> </property>
> >>
> >> -----Original Message-----
> >> From: Ted Yu [mailto:yuzhihong@gmail.com]
> >> Sent: Tuesday, November 26, 2013 1:56 AM
> >> To: user@hbase.apache.org
> >> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> >> RPC.stopProxy called on non proxy.
> >>
> >> Here is the caller to createReorderingProxy():
> >>
> >> ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
> >>
> >> where namenode is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB:
> >>
> >> public class ClientNamenodeProtocolTranslatorPB implements
> >>     ProtocolMetaInterface, ClientProtocol, Closeable, ProtocolTranslator {
> >>
> >> In createReorderingProxy():
> >>
> >>   new Class[]{ClientProtocol.class, Closeable.class},
> >>
> >> we ask for the Closeable interface.
> >>
> >>
> >> Did the error persist after you replaced the jars with hadoop-hdfs-2.2.0.jar?
> >> Meaning, did you start HBase using the new Hadoop jars?
> >>
> >> Cheers
> >>
> >>
> >> On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com>
> wrote:
> >>
> >>> I looked into the source code of org/apache/hadoop/hbase/fs/HFileSystem.java,
> >>> and whenever I execute hbase-daemon.sh stop master (or regionserver),
> >>> method.getName() is "close", but
> >>> org/apache/hadoop/hdfs/protocol/ClientProtocol.java does not have a
> >>> method named "close", so it results in the error
> >>> "object is not an instance of declaring class".
> >>>
> >>> Could someone familiar with hbase-0.96.0-hadoop2 tell me whether this
> >>> problem needs to be fixed, and how to fix it?
> >>>
> >>> private static ClientProtocol createReorderingProxy(final ClientProtocol cp,
> >>>     final ReorderBlocks lrb, final Configuration conf) {
> >>>   return (ClientProtocol) Proxy.newProxyInstance(
> >>>       cp.getClass().getClassLoader(),
> >>>       new Class[]{ClientProtocol.class, Closeable.class},
> >>>       new InvocationHandler() {
> >>>         public Object invoke(Object proxy, Method method,
> >>>             Object[] args) throws Throwable {
> >>>           try {
> >>>             // method.invoke will fail if method.getName().equals("close"),
> >>>             // because ClientProtocol does not have a method "close"
> >>>             Object res = method.invoke(cp, args);
> >>>             if (res != null && args != null && args.length == 3
> >>>                 && "getBlockLocations".equals(method.getName())
> >>>                 && res instanceof LocatedBlocks
> >>>                 && args[0] instanceof String
> >>>                 && args[0] != null) {
> >>>               lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
> >>>             }
> >>>             return res;
> >>>           } catch (InvocationTargetException ite) {
> >>>             // We will have this for all the exceptions, checked or not,
> >>>             // sent by any layer, including the functional exception
> >>>             Throwable cause = ite.getCause();
> >>>             if (cause == null) {
> >>>               throw new RuntimeException(
> >>>                   "Proxy invocation failed and getCause is null", ite);
> >>>             }
> >>>             if (cause instanceof UndeclaredThrowableException) {
> >>>               Throwable causeCause = cause.getCause();
> >>>               if (causeCause == null) {
> >>>                 throw new RuntimeException(
> >>>                     "UndeclaredThrowableException had null cause!");
> >>>               }
> >>>               cause = cause.getCause();
> >>>             }
> >>>             throw cause;
> >>>           }
> >>>         }
> >>>       });
> >>> }
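To make the issue in the snippet above concrete, here is a sketch of one possible direction for a fix. This is not the committed HBASE-10029 patch, just an illustration under simplified, hypothetical interfaces: intercept close() in the invocation handler instead of delegating it reflectively, so the wrapped object is only closed when it really is Closeable. ClientLike below is a stand-in for ClientProtocol, not a Hadoop class:

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.Proxy;

public class CloseSafeProxyDemo {
    // Hypothetical stand-in for ClientProtocol (no close() declared).
    interface ClientLike {
        String ping();
    }

    // Wrap the target behind a proxy that also advertises Closeable, but
    // special-case close() so we never reflectively invoke a method the
    // underlying target does not declare.
    static ClientLike wrap(final ClientLike target) {
        return (ClientLike) Proxy.newProxyInstance(
                ClientLike.class.getClassLoader(),
                new Class<?>[]{ClientLike.class, Closeable.class},
                (proxy, method, args) -> {
                    if ("close".equals(method.getName())) {
                        if (target instanceof Closeable) {
                            ((Closeable) target).close();
                        }
                        return null;  // target not Closeable: nothing to release
                    }
                    return method.invoke(target, args);  // normal delegation
                });
    }

    static String pingThenClose() throws IOException {
        ClientLike inner = () -> "pong";      // does NOT implement Closeable
        ClientLike wrapped = wrap(inner);
        String answer = wrapped.ping();       // ordinary calls still delegate
        ((Closeable) wrapped).close();        // no IllegalArgumentException now
        return answer;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(pingThenClose());
    }
}
```

The design point is that close() is handled on the proxy itself rather than forwarded to the target, which sidesteps the receiver-type check that fails when the target (such as an HA retry proxy exposing only ClientProtocol) lacks a close() method.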
> >>>
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: MA11 YTHung1
> >>> Sent: Thursday, November 21, 2013 9:57 AM
> >>> To: user@hbase.apache.org
> >>> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> >>> RPC.stopProxy called on non proxy.
> >>>
> >>> Additional information:
> >>>
> >>> I replaced all files prefixed with hadoop in hbase-0.96.0-hadoop2/lib
> >>> with the hadoop-2.2.0 libraries.
> >>>
> >>> The ls -l of hbase-0.96.0-hadoop2/lib is as below:
> >>>
> >>> -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
> >>> -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
> >>> -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
> >>> -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
> >>> -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12 commons-beanutils-1.7.0.jar
> >>> -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13 commons-beanutils-core-1.8.0.jar
> >>> -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
> >>> -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
> >>> -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12 commons-collections-3.2.1.jar
> >>> -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
> >>> -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13 commons-configuration-1.6.jar
> >>> -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
> >>> -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
> >>> -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
> >>> -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12 commons-httpclient-3.1.jar
> >>> -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
> >>> -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
> >>> -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
> >>> -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
> >>> -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
> >>> -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
> >>> [remaining entries are identical to the listing quoted earlier in this thread]
> >>>
> >>> Best regards,
> >>> Henry
> >>>
> >>> -----Original Message-----
> >>> From: MA11 YTHung1
> >>> Sent: Thursday, November 21, 2013 9:51 AM
> >>> To: user@hbase.apache.org
> >>> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> >>> RPC.stopProxy called on non proxy.
> >>>
> >>> I'm using hadoop-2.2.0 stable
> >>>
> >>> -----Original Message-----
> >>> From: Jimmy Xiang [mailto:jxiang@cloudera.com]
> >>> Sent: Thursday, November 21, 2013 9:49 AM
> >>> To: user
> >>> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> >>> RPC.stopProxy called on non proxy.
> >>>
> >>> Which version of Hadoop do you use?
> >>>
> >>>
> >>> On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com>
> wrote:
> >>>
> >>>> Hi All,
> >>>>
> >>>> When stopping master or regionserver, I found some ERROR and WARN
> >>>> in the log files, are these errors can cause problem in hbase:
> >>>>
> >>>> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> >>>> 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> >>>> java.lang.IllegalArgumentException: object is not an instance of declaring class
> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>>> at java.lang.reflect.Method.invoke(Method.java:597)
> >>>> at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> >>>> at $Proxy18.close(Unknown Source)
> >>>> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> >>>> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> >>>> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> >>>> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> >>>> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> >>>> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> >>>> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> >>>> 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> >>>> org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> >>>> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> >>>> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> >>>> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> >>>> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> >>>> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> >>>> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> >>>> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> >>>>
> >>>> Best regards,
> >>>> Henry
> >>>>
> >>>> ________________________________
> >>>> The privileged confidential information contained in this email is
> >>>> intended for use only by the addressees as indicated by the
> >>>> original sender of this email. If you are not the addressee
> >>>> indicated in this email or are not responsible for delivery of the
> >>>> email to such a person, please kindly reply to the sender
> >>>> indicating this fact and delete all copies of it from your
> >>>> computer and network server immediately. Your cooperation is
> >>>> highly appreciated. It is advised that any unauthorized use of
> >>>> confidential information of Winbond is strictly prohibited; and
> >>>> any information in this email irrelevant to the official business
> >>>> of Winbond shall be deemed as neither given nor
> >>> endorsed by Winbond.
>
Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Posted by Ted Yu <yu...@gmail.com>.
Update:
Henry tried my patch attached to HBASE-10029.
From the master log, it seems my patch worked.
I will get back to this thread after further testing / code review.
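For readers who cannot open the JIRA attachment, the general shape of a close()-safe handler can be sketched in isolation. This is a hypothetical illustration of the kind of guard discussed in this thread, not the actual HBASE-10029 patch; FakeProtocol, closeSafeProxy, and CloseSafeProxyDemo are made-up names:

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class CloseSafeProxyDemo {

    // Stand-in for ClientProtocol: an interface with no close() method.
    interface FakeProtocol {
        String ping();
    }

    // Wraps 'target' so that close() is intercepted before Method.invoke
    // and therefore never dispatched against a non-Closeable object.
    @SuppressWarnings("unchecked")
    public static <T> T closeSafeProxy(final T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[]{iface, Closeable.class},
                new InvocationHandler() {
                    public Object invoke(Object p, Method method, Object[] args)
                            throws Throwable {
                        if ("close".equals(method.getName())
                                && method.getDeclaringClass() == Closeable.class) {
                            // Forward close() only if the target can take it.
                            if (target instanceof Closeable) {
                                ((Closeable) target).close();
                            }
                            return null;
                        }
                        return method.invoke(target, args);
                    }
                });
    }

    public static String demo() throws Exception {
        FakeProtocol plain = new FakeProtocol() {
            public String ping() { return "pong"; }
        };
        FakeProtocol proxy = closeSafeProxy(plain, FakeProtocol.class);
        String ping = proxy.ping();
        // Without the guard this would throw IllegalArgumentException,
        // because 'plain' is not an instance of Closeable.
        ((Closeable) proxy).close();
        return ping + "|closed";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

The key point is that close() never reaches Method.invoke against an object whose class does not declare it.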
Cheers
On Nov 25, 2013, at 6:05 PM, Henry Hung <YT...@winbond.com> wrote:
> @Ted:
>
> I created the JIRA; is the information sufficient?
> https://issues.apache.org/jira/browse/HBASE-10029
>
> Best regards,
> Henry
>
> -----Original Message-----
> From: Ted Yu [mailto:yuzhihong@gmail.com]
> Sent: Tuesday, November 26, 2013 9:30 AM
> To: user@hbase.apache.org
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
>
> Henry:
> Thanks for the additional information.
>
> Looks like an HA namenode with QJM is not covered by the current code.
>
> Mind filing a JIRA with a summary of this thread?
>
> Cheers
>
>
> On Tue, Nov 26, 2013 at 9:12 AM, Henry Hung <YT...@winbond.com> wrote:
>
>> @Ted
>> Yes, I use the hadoop-hdfs-2.2.0.jar.
>>
>> BTW, how can you be certain that the namenode class is
>> ClientNamenodeProtocolTranslatorPB?
>>
>> From NameNodeProxies, I can only assume that
>> ClientNamenodeProtocolTranslatorPB is used only when connecting to a
>> single (non-HA) Hadoop namenode.
>>
>> public static <T> ProxyAndInfo<T> createNonHAProxy(
>> Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
>> UserGroupInformation ugi, boolean withRetries) throws IOException {
>> Text dtService = SecurityUtil.buildTokenService(nnAddr);
>>
>> T proxy;
>> if (xface == ClientProtocol.class) {
>> proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
>> withRetries);
>>
>>
>> But I'm using an HA configuration with QJM, so my guess is that
>> createProxy will go to the HA case, because I provide a
>> failoverProxyProviderClass of
>> "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
>>
>> public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
>> URI nameNodeUri, Class<T> xface) throws IOException {
>> Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
>> getFailoverProxyProviderClass(conf, nameNodeUri, xface);
>>
>> if (failoverProxyProviderClass == null) {
>> // Non-HA case
>> return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri),
>> xface,
>> UserGroupInformation.getCurrentUser(), true);
>> } else {
>> // HA case
>> FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
>> .createFailoverProxyProvider(conf,
>> failoverProxyProviderClass, xface,
>> nameNodeUri);
>> Conf config = new Conf(conf);
>> T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
>> RetryPolicies
>> .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
>> config.maxFailoverAttempts, config.failoverSleepBaseMillis,
>> config.failoverSleepMaxMillis));
>>
>> Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
>> return new ProxyAndInfo<T>(proxy, dtService);
>> }
>> }
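Henry's reading of the HA branch can be checked with a self-contained experiment: RetryProxy.create builds a java.lang.reflect.Proxy from only the protocol interface, so the resulting object does not expose Closeable even if the underlying translator implements it. The sketch below is hypothetical and uses no Hadoop classes; FakeClientProtocol, FakeTranslator, and HaWrappingDemo are made-up stand-ins:

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class HaWrappingDemo {

    interface FakeClientProtocol {
        String ping();
    }

    // Like ClientNamenodeProtocolTranslatorPB in the non-HA path, the
    // concrete translator implements both the protocol and Closeable.
    public static class FakeTranslator implements FakeClientProtocol, Closeable {
        public String ping() { return "pong"; }
        public void close() { }
    }

    // Mimics the HA path: the retry wrapper is created with ONLY the
    // protocol interface, so Closeable is no longer advertised.
    public static boolean wrappedIsCloseable() {
        final FakeTranslator translator = new FakeTranslator();
        Object wrapped = Proxy.newProxyInstance(
                FakeClientProtocol.class.getClassLoader(),
                new Class<?>[]{FakeClientProtocol.class},
                new InvocationHandler() {
                    public Object invoke(Object p, Method m, Object[] args)
                            throws Throwable {
                        return m.invoke(translator, args);
                    }
                });
        return wrapped instanceof Closeable;
    }

    public static void main(String[] args) {
        System.out.println(new FakeTranslator() instanceof Closeable); // true
        System.out.println(wrappedIsCloseable());                      // false
    }
}
```

If this matches the real HA code path, it would explain RPC.stopProxy's complaint that the proxy's target is neither Closeable nor backed by a closeable invocation handler.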
>>
>> Here is the snippet of my hdfs-site.xml:
>>
>> <property>
>> <name>dfs.nameservices</name>
>> <value>hadoopdev</value>
>> </property>
>> <property>
>> <name>dfs.ha.namenodes.hadoopdev</name>
>> <value>nn1,nn2</value>
>> </property>
>> <property>
>> <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
>> <value>fphd9.ctpilot1.com:9000</value>
>> </property>
>> <property>
>> <name>dfs.namenode.http-address.hadoopdev.nn1</name>
>> <value>fphd9.ctpilot1.com:50070</value>
>> </property>
>> <property>
>> <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
>> <value>fphd10.ctpilot1.com:9000</value>
>> </property>
>> <property>
>> <name>dfs.namenode.http-address.hadoopdev.nn2</name>
>> <value>fphd10.ctpilot1.com:50070</value>
>> </property>
>> <property>
>> <name>dfs.namenode.shared.edits.dir</name>
>> <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;
>> fphd10.ctpilot1.com:8485/hadoopdev</value>
>> </property>
>> <property>
>> <name>dfs.client.failover.proxy.provider.hadoopdev</name>
>>
>> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>> </property>
>> <property>
>> <name>dfs.ha.fencing.methods</name>
>> <value>shell(/bin/true)</value>
>> </property>
>> <property>
>> <name>dfs.journalnode.edits.dir</name>
>> <value>/data/hadoop/hadoop-data-2/journal</value>
>> </property>
>> <property>
>> <name>dfs.ha.automatic-failover.enabled</name>
>> <value>true</value>
>> </property>
>> <property>
>> <name>ha.zookeeper.quorum</name>
>> <value>fphd1.ctpilot1.com:2222</value>
>> </property>
>>
>> -----Original Message-----
>> From: Ted Yu [mailto:yuzhihong@gmail.com]
>> Sent: Tuesday, November 26, 2013 1:56 AM
>> To: user@hbase.apache.org
>> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
>> RPC.stopProxy called on non proxy.
>>
>> Here is the caller to createReorderingProxy():
>>
>> ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
>>
>> where namenode
>> is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB :
>>
>> public class ClientNamenodeProtocolTranslatorPB implements
>>
>> ProtocolMetaInterface, ClientProtocol, Closeable,
>> ProtocolTranslator {
>>
>> In createReorderingProxy() :
>>
>> new Class[]{ClientProtocol.class, Closeable.class},
>>
>> We ask for the Closeable interface.
>>
>>
>> Did the error persist after you replaced it with hadoop-hdfs-2.2.0.jar?
>> Meaning, did you start HBase using the new Hadoop jars?
>>
>> Cheers
>>
>>
>> On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com> wrote:
>>
>>> I looked into the source code of
>>> org/apache/hadoop/hbase/fs/HFileSystem.java. Whenever I execute
>>> hbase-daemon.sh stop master (or regionserver), method.getName() is
>>> "close", but org/apache/hadoop/hdfs/protocol/ClientProtocol.java does
>>> not have a method named "close", so it results in the error "object is
>>> not an instance of declaring class".
>>>
>>> Could someone familiar with hbase-0.96.0-hadoop2 tell me whether this
>>> problem needs to be fixed, and how to fix it?
>>>
>>>  private static ClientProtocol createReorderingProxy(final ClientProtocol cp,
>>>      final ReorderBlocks lrb, final Configuration conf) {
>>>    return (ClientProtocol) Proxy.newProxyInstance(
>>>        cp.getClass().getClassLoader(),
>>>        new Class[]{ClientProtocol.class, Closeable.class},
>>>        new InvocationHandler() {
>>>          public Object invoke(Object proxy, Method method,
>>>              Object[] args) throws Throwable {
>>>            try {
>>>              // method.invoke will fail if method.getName().equals("close"),
>>>              // because ClientProtocol does not have a method "close"
>>>              Object res = method.invoke(cp, args);
>>>              if (res != null && args != null && args.length == 3
>>>                  && "getBlockLocations".equals(method.getName())
>>>                  && res instanceof LocatedBlocks
>>>                  && args[0] instanceof String
>>>                  && args[0] != null) {
>>>                lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
>>>              }
>>>              return res;
>>>            } catch (InvocationTargetException ite) {
>>>              // We will have this for all exceptions, checked or not, sent
>>>              // by any layer, including the functional exception
>>>              Throwable cause = ite.getCause();
>>>              if (cause == null) {
>>>                throw new RuntimeException(
>>>                    "Proxy invocation failed and getCause is null", ite);
>>>              }
>>>              if (cause instanceof UndeclaredThrowableException) {
>>>                Throwable causeCause = cause.getCause();
>>>                if (causeCause == null) {
>>>                  throw new RuntimeException(
>>>                      "UndeclaredThrowableException had null cause!");
>>>                }
>>>                cause = cause.getCause();
>>>              }
>>>              throw cause;
>>>            }
>>>          }
>>>        });
>>>  }
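The failure described above can be reproduced with plain JDK reflection: when a dynamic proxy advertises Closeable but its handler blindly forwards close() to a target that does not implement Closeable, Method.invoke throws the same IllegalArgumentException seen in the shutdown log. A minimal hypothetical sketch (FakeProtocol and ProxyCloseDemo are made-up names, not HBase code):

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyCloseDemo {

    // Stand-in for ClientProtocol: no close() method declared.
    interface FakeProtocol {
        String ping();
    }

    public static String demo() {
        // The target implements FakeProtocol but NOT Closeable.
        final FakeProtocol target = new FakeProtocol() {
            public String ping() { return "pong"; }
        };

        // Like createReorderingProxy: the proxy advertises Closeable even
        // though the wrapped target cannot handle close().
        FakeProtocol proxy = (FakeProtocol) Proxy.newProxyInstance(
                FakeProtocol.class.getClassLoader(),
                new Class<?>[]{FakeProtocol.class, Closeable.class},
                new InvocationHandler() {
                    public Object invoke(Object p, Method method, Object[] args)
                            throws Throwable {
                        // For close(), method.getDeclaringClass() is
                        // Closeable, which 'target' is not an instance of.
                        return method.invoke(target, args);
                    }
                });

        String first = proxy.ping(); // fine: ping() is declared on FakeProtocol
        String second;
        try {
            ((Closeable) proxy).close();
            second = "no error";
        } catch (IllegalArgumentException e) {
            // message: "object is not an instance of declaring class"
            second = e.getClass().getSimpleName();
        } catch (Exception e) {
            second = "unexpected: " + e.getClass().getSimpleName();
        }
        return first + "|" + second;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because IllegalArgumentException is unchecked, the proxy rethrows it as-is, which is what RPC.stopProxy then logs.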
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: MA11 YTHung1
>>> Sent: Thursday, November 21, 2013 9:57 AM
>>> To: user@hbase.apache.org
>>> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
>>> RPC.stopProxy called on non proxy.
>>>
>>> Additional information:
>>>
>>> I replace all files with prefix hadoop in hbase-0.96.0-hadoop2/lib
>>> with
>>> hadoop-2.2.0 libraries.
>>>
>>> the ls -l of hbase-0.96.0-hadoop2/lib as below:
>>>
>>> -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
>>> -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
>>> -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
>>> -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
>>> -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12
>> commons-beanutils-1.7.0.jar
>>> -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13
>>> commons-beanutils-core-1.8.0.jar
>>> -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
>>> -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
>>> -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12
>>> commons-collections-3.2.1.jar
>>> -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
>>> -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13
>>> commons-configuration-1.6.jar
>>> -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
>>> -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
>>> -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
>>> -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12
>> commons-httpclient-3.1.jar
>>> -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
>>> -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
>>> -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
>>> -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
>>> -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
>>> -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
>>> -rw-r--r-- 1 hadoop users 15322 Sep 17 16:12
>>> findbugs-annotations-1.3.9-1.jar
>>> -rw-r--r-- 1 hadoop users 21817 Sep 17 23:29
>>> gmbal-api-only-3.0.0-b023.jar
>>> -rw-r--r-- 1 hadoop users 684337 Sep 17 23:29
>> grizzly-framework-2.1.1.jar
>>> -rw-r--r-- 1 hadoop users 210846 Sep 17 23:29
>>> grizzly-framework-2.1.1-tests.jar
>>> -rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
>>> -rw-r--r-- 1 hadoop users 193583 Sep 17 23:29
>>> grizzly-http-server-2.1.1.jar
>>> -rw-r--r-- 1 hadoop users 336878 Sep 17 23:29
>>> grizzly-http-servlet-2.1.1.jar
>>> -rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
>>> -rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
>>> -rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
>>> -rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
>>> -rw-r--r-- 1 hadoop users 16778 Nov 20 17:39
>>> hadoop-annotations-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 2576 Oct 12 06:20
>>> hadoop-client-2.1.0-beta.jar
>>> -rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50
>>> hadoop-common-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48
>>> hadoop-hdfs-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48
>> hadoop-hdfs-2.2.0-tests.jar
>>> -rw-r--r-- 1 hadoop users 482042 Nov 21 08:49
>>> hadoop-mapreduce-client-app-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 656365 Nov 21 08:49
>>> hadoop-mapreduce-client-common-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50
>>> hadoop-mapreduce-client-core-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 35216 Nov 21 08:50
>>> hadoop-mapreduce-client-jobclient-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50
>>> hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
>>> -rw-r--r-- 1 hadoop users 21537 Nov 21 08:51
>>> hadoop-mapreduce-client-shuffle-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 94728 Nov 21 08:51
>>> hadoop-yarn-client-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51
>>> hadoop-yarn-common-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 175554 Nov 21 08:52
>>> hadoop-yarn-server-common-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 467638 Nov 21 08:52
>>> hadoop-yarn-server-nodemanager-2.2.0.jar
>>> -rw-r--r-- 1 hadoop users 825853 Oct 12 06:28
>>> hbase-client-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 354845 Oct 12 06:28
>>> hbase-common-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 132690 Oct 12 06:28
>>> hbase-common-0.96.0-hadoop2-tests.jar
>>> -rw-r--r-- 1 hadoop users 97428 Oct 12 06:28
>>> hbase-examples-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 72765 Oct 12 06:28
>>> hbase-hadoop2-compat-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 32096 Oct 12 06:28
>>> hbase-hadoop-compat-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 12174 Oct 12 06:28
>> hbase-it-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 288784 Oct 12 06:28
>>> hbase-it-0.96.0-hadoop2-tests.jar
>>> -rw-r--r-- 1 hadoop users 94784 Oct 12 06:28
>>> hbase-prefix-tree-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28
>>> hbase-protocol-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28
>>> hbase-server-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28
>>> hbase-server-0.96.0-hadoop2-tests.jar
>>> -rw-r--r-- 1 hadoop users 12554 Oct 12 06:28
>>> hbase-shell-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 10941 Oct 12 06:28
>>> hbase-testing-util-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28
>>> hbase-thrift-0.96.0-hadoop2.jar
>>> -rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
>>> -rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
>>> -rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
>>> -rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
>>> -rw-r--r-- 1 hadoop users 227517 Sep 17 16:13
>> jackson-core-asl-1.8.8.jar
>>> -rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
>>> -rw-r--r-- 1 hadoop users 669065 Sep 17 16:13
>>> jackson-mapper-asl-1.8.8.jar
>>> -rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
>>> -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
>>> -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13
>> jasper-compiler-5.5.23.jar
>>> -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
>>> -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
>>> -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
>>> -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
>>> -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
>>> -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
>>> -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
>>> -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
>>> -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
>>> -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
>>> -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
>>> -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29
>>> jersey-test-framework-core-1.8.jar
>>> -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29
>>> jersey-test-framework-grizzly2-1.8.jar
>>> -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
>>> -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
>>> -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
>>> -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15
>> jetty-sslengine-6.1.26.jar
>>> -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
>>> -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
>>> -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
>>> -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
>>> -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
>>> -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
>>> -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
>>> -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
>>> -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
>>> -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
>>> -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29
>>> management-api-3.0.0-b012.jar
>>> -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
>>> drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
>>> -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
>>> -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
>>> -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
>>> drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
>>> -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13
>> servlet-api-2.5-6.1.14.jar
>>> -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
>>> -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
>>> -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
>>> -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
>>> -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
>>> -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
>>> -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
>>> -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
>>>
>>> Best regards,
>>> Henry
>>>
>>> -----Original Message-----
>>> From: MA11 YTHung1
>>> Sent: Thursday, November 21, 2013 9:51 AM
>>> To: user@hbase.apache.org
>>> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
>>> RPC.stopProxy called on non proxy.
>>>
>>> I'm using hadoop-2.2.0 stable
>>>
>>> -----Original Message-----
>>> From: Jimmy Xiang [mailto:jxiang@cloudera.com]
>>> Sent: Thursday, November 21, 2013 9:49 AM
>>> To: user
>>> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
>>> RPC.stopProxy called on non proxy.
>>>
>>> Which version of Hadoop do you use?
>>>
>>>
>>> On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> When stopping the master or a regionserver, I found the following
>>>> ERROR and WARN entries in the log files. Can these errors cause a
>>>> problem in HBase?
>>>>
>>>> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
>>>> 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
>>>> java.lang.IllegalArgumentException: object is not an instance of declaring class
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> at java.lang.reflect.Method.invoke(Method.java:597)
>>>> at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
>>>> at $Proxy18.close(Unknown Source)
>>>> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
>>>> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
>>>> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
>>>> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
>>>> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
>>>> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
>>>> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>>>> 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
>>>> org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
>>>> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
>>>> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
>>>> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
>>>> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
>>>> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
>>>> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
>>>> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>>>>
>>>> Best regards,
>>>> Henry
>>>>
RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
called on non proxy.
Posted by Henry Hung <YT...@winbond.com>.
@Ted:
I created the JIRA; is the information sufficient?
https://issues.apache.org/jira/browse/HBASE-10029
Best regards,
Henry
-----Original Message-----
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Tuesday, November 26, 2013 9:30 AM
To: user@hbase.apache.org
Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Henry:
Thanks for the additional information.
Looks like an HA namenode with QJM is not covered by the current code.
Mind filing a JIRA with a summary of this thread?
Cheers
On Tue, Nov 26, 2013 at 9:12 AM, Henry Hung <YT...@winbond.com> wrote:
> @Ted
> Yes, I use the hadoop-hdfs-2.2.0.jar.
>
> BTW, how are you certain that the namenode class is
> ClientNamenodeProtocolTranslatorPB?
>
> From NameNodeProxies, I can only assume that
> ClientNamenodeProtocolTranslatorPB is used only when connecting to a
> single Hadoop namenode.
>
> public static <T> ProxyAndInfo<T> createNonHAProxy(
> Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
> UserGroupInformation ugi, boolean withRetries) throws IOException {
> Text dtService = SecurityUtil.buildTokenService(nnAddr);
>
> T proxy;
> if (xface == ClientProtocol.class) {
> proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
> withRetries);
>
>
> But I'm using an HA configuration with QJM, so my guess is that
> createProxy will go to the HA case, because I provide
> failoverProxyProviderClass as
> "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
>
> public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
> URI nameNodeUri, Class<T> xface) throws IOException {
> Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
> getFailoverProxyProviderClass(conf, nameNodeUri, xface);
>
> if (failoverProxyProviderClass == null) {
> // Non-HA case
> return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri),
> xface,
> UserGroupInformation.getCurrentUser(), true);
> } else {
> // HA case
> FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
> .createFailoverProxyProvider(conf,
> failoverProxyProviderClass, xface,
> nameNodeUri);
> Conf config = new Conf(conf);
> T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
> RetryPolicies
> .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
> config.maxFailoverAttempts, config.failoverSleepBaseMillis,
> config.failoverSleepMaxMillis));
>
> Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
> return new ProxyAndInfo<T>(proxy, dtService);
> }
> }
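The mismatch under discussion can be reproduced with plain java.lang.reflect types, independent of Hadoop. Below is a minimal sketch using hypothetical stand-in names (Protocol, NonCloseableImpl are illustrative, not real Hadoop classes): a dynamic proxy advertises Closeable, but its handler blindly delegates to a target that does not implement Closeable, so a reflective close() fails with IllegalArgumentException ("object is not an instance of declaring class"), matching the HBase shutdown log.

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class CloseMismatchDemo {
    // Hypothetical stand-in for ClientProtocol: declares no close().
    interface Protocol { String ping(); }

    static class NonCloseableImpl implements Protocol {
        public String ping() { return "pong"; }
    }

    // Like the reordering proxy: advertise Closeable, delegate blindly.
    static Protocol wrap(final Protocol target) {
        return (Protocol) Proxy.newProxyInstance(
            Protocol.class.getClassLoader(),
            new Class<?>[]{Protocol.class, Closeable.class},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args)
                        throws Throwable {
                    // Blind delegation, as in the quoted handler: this fails when
                    // the method's declaring class is Closeable but target is not.
                    return method.invoke(target, args);
                }
            });
    }

    // Returns true when close() trips the reflective receiver check.
    static boolean closeFails(Protocol proxy) {
        try {
            ((Closeable) proxy).close();
            return false;
        } catch (IllegalArgumentException expected) {
            return true; // "object is not an instance of declaring class"
        } catch (Exception other) {
            return false;
        }
    }

    public static void main(String[] args) {
        Protocol proxy = wrap(new NonCloseableImpl());
        System.out.println(proxy.ping()); // Protocol methods delegate normally
        System.out.println("close fails: " + closeFails(proxy));
    }
}
```

Under these assumptions, ping() is delegated normally while close() trips the receiver check inside Method.invoke, which lines up with the stack trace frame at HFileSystem$1.invoke.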
>
> Here is the snippet of my hdfs-site.xml:
>
> <property>
> <name>dfs.nameservices</name>
> <value>hadoopdev</value>
> </property>
> <property>
> <name>dfs.ha.namenodes.hadoopdev</name>
> <value>nn1,nn2</value>
> </property>
> <property>
> <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
> <value>fphd9.ctpilot1.com:9000</value>
> </property>
> <property>
> <name>dfs.namenode.http-address.hadoopdev.nn1</name>
> <value>fphd9.ctpilot1.com:50070</value>
> </property>
> <property>
> <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
> <value>fphd10.ctpilot1.com:9000</value>
> </property>
> <property>
> <name>dfs.namenode.http-address.hadoopdev.nn2</name>
> <value>fphd10.ctpilot1.com:50070</value>
> </property>
> <property>
> <name>dfs.namenode.shared.edits.dir</name>
> <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;
> fphd10.ctpilot1.com:8485/hadoopdev</value>
> </property>
> <property>
> <name>dfs.client.failover.proxy.provider.hadoopdev</name>
>
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> <property>
> <name>dfs.ha.fencing.methods</name>
> <value>shell(/bin/true)</value>
> </property>
> <property>
> <name>dfs.journalnode.edits.dir</name>
> <value>/data/hadoop/hadoop-data-2/journal</value>
> </property>
> <property>
> <name>dfs.ha.automatic-failover.enabled</name>
> <value>true</value>
> </property>
> <property>
> <name>ha.zookeeper.quorum</name>
> <value>fphd1.ctpilot1.com:2222</value>
> </property>
>
> -----Original Message-----
> From: Ted Yu [mailto:yuzhihong@gmail.com]
> Sent: Tuesday, November 26, 2013 1:56 AM
> To: user@hbase.apache.org
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> RPC.stopProxy called on non proxy.
>
> Here is the caller to createReorderingProxy():
>
> ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
>
> where namenode
> is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB :
>
> public class ClientNamenodeProtocolTranslatorPB implements
>
> ProtocolMetaInterface, ClientProtocol, Closeable,
> ProtocolTranslator {
>
> In createReorderingProxy() :
>
> new Class[]{ClientProtocol.class, Closeable.class},
>
> We ask for Closeable interface.
>
>
> Did the error persist after you replaced the jars with hadoop-hdfs-2.2.0.jar?
> Meaning, did you start HBase using the new hadoop jars?
>
> Cheers
>
>
> On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com> wrote:
>
> > I looked into the source code of
> > org/apache/hadoop/hbase/fs/HFileSystem.java, and whenever I execute
> > hbase-daemon.sh stop master (or regionserver), method.getName() is
> > "close", but org/apache/hadoop/hdfs/protocol/ClientProtocol.java does
> > not have a method named "close", thus it results in the error "object
> > is not an instance of declaring class".
> >
> > Could someone familiar with hbase-0.96.0 hadoop2 tell me whether
> > this problem needs to be fixed? And how to fix it?
> >
> > private static ClientProtocol createReorderingProxy(final
> > ClientProtocol cp, final ReorderBlocks lrb, final Configuration conf) {
> > return (ClientProtocol) Proxy.newProxyInstance
> > (cp.getClass().getClassLoader(),
> > new Class[]{ClientProtocol.class, Closeable.class},
> > new InvocationHandler() {
> > public Object invoke(Object proxy, Method method,
> > Object[] args) throws Throwable {
> > try {
> >                 // method.invoke will fail if method.getName().equals("close")
> >                 // because ClientProtocol does not have a method "close"
> > Object res = method.invoke(cp, args);
> > if (res != null && args != null && args.length == 3
> > && "getBlockLocations".equals(method.getName())
> > && res instanceof LocatedBlocks
> > && args[0] instanceof String
> > && args[0] != null) {
> > lrb.reorderBlocks(conf, (LocatedBlocks) res,
> > (String) args[0]);
> > }
> > return res;
> > } catch (InvocationTargetException ite) {
> > // We will have this for all the exception,
> > checked on not, sent
> > // by any layer, including the functional exception
> > Throwable cause = ite.getCause();
> > if (cause == null){
> > throw new RuntimeException(
> >                         "Proxy invocation failed and getCause is null", ite);
> > }
> > if (cause instanceof UndeclaredThrowableException) {
> > Throwable causeCause = cause.getCause();
> > if (causeCause == null) {
> >                     throw new RuntimeException("UndeclaredThrowableException had null cause!");
> > }
> > cause = cause.getCause();
> > }
> > throw cause;
> > }
> > }
> > });
> > }
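One possible repair, sketched here as an illustrative guess rather than the actual HBASE-10029 patch: guard the delegation so that only methods the target can actually receive are reflectively invoked, and handle close() explicitly otherwise. The names below (Protocol, Impl) are hypothetical stand-ins, not the real HBase/HDFS types.

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class GuardedProxyDemo {
    // Hypothetical stand-ins; not the real HBase/HDFS types.
    interface Protocol { String ping(); }

    static class Impl implements Protocol {
        public String ping() { return "pong"; }
    }

    static Protocol createGuardedProxy(final Protocol cp) {
        return (Protocol) Proxy.newProxyInstance(
            Protocol.class.getClassLoader(),
            new Class<?>[]{Protocol.class, Closeable.class},
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args)
                        throws Throwable {
                    // Guard: only delegate methods the target can actually receive.
                    if (!method.getDeclaringClass().isInstance(cp)) {
                        if ("close".equals(method.getName())) {
                            // Close the delegate ourselves when it is closeable,
                            // otherwise treat close() as a no-op.
                            if (cp instanceof Closeable) {
                                ((Closeable) cp).close();
                            }
                            return null;
                        }
                        throw new UnsupportedOperationException(method.getName());
                    }
                    return method.invoke(cp, args);
                }
            });
    }

    // Helper so callers need not handle the checked IOException from close().
    static boolean closesCleanly(Protocol proxy) {
        try {
            ((Closeable) proxy).close();
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Protocol proxy = createGuardedProxy(new Impl());
        System.out.println(proxy.ping());
        System.out.println("close succeeds: " + closesCleanly(proxy));
    }
}
```

With this guard, close() no longer reaches Method.invoke with an incompatible receiver, so the IllegalArgumentException from the shutdown hook would not occur in this sketch.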
> >
> >
> >
> > -----Original Message-----
> > From: MA11 YTHung1
> > Sent: Thursday, November 21, 2013 9:57 AM
> > To: user@hbase.apache.org
> > Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> > RPC.stopProxy called on non proxy.
> >
> > Additional information:
> >
> > I replaced all files with the prefix "hadoop" in hbase-0.96.0-hadoop2/lib
> > with the hadoop-2.2.0 libraries.
> >
> > The ls -l of hbase-0.96.0-hadoop2/lib is as below:
> >
> > -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
> > -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
> > -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
> > -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
> > -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12
> commons-beanutils-1.7.0.jar
> > -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13
> > commons-beanutils-core-1.8.0.jar
> > -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
> > -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
> > -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12
> > commons-collections-3.2.1.jar
> > -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
> > -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13
> > commons-configuration-1.6.jar
> > -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
> > -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
> > -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
> > -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12
> commons-httpclient-3.1.jar
> > -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
> > -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
> > -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
> > -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
> > -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
> > -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
> > -rw-r--r-- 1 hadoop users 15322 Sep 17 16:12
> > findbugs-annotations-1.3.9-1.jar
> > -rw-r--r-- 1 hadoop users 21817 Sep 17 23:29
> > gmbal-api-only-3.0.0-b023.jar
> > -rw-r--r-- 1 hadoop users 684337 Sep 17 23:29
> grizzly-framework-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 210846 Sep 17 23:29
> > grizzly-framework-2.1.1-tests.jar
> > -rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 193583 Sep 17 23:29
> > grizzly-http-server-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 336878 Sep 17 23:29
> > grizzly-http-servlet-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
> > -rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
> > -rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
> > -rw-r--r-- 1 hadoop users 16778 Nov 20 17:39
> > hadoop-annotations-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 2576 Oct 12 06:20
> > hadoop-client-2.1.0-beta.jar
> > -rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50
> > hadoop-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48
> > hadoop-hdfs-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48
> hadoop-hdfs-2.2.0-tests.jar
> > -rw-r--r-- 1 hadoop users 482042 Nov 21 08:49
> > hadoop-mapreduce-client-app-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 656365 Nov 21 08:49
> > hadoop-mapreduce-client-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50
> > hadoop-mapreduce-client-core-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 35216 Nov 21 08:50
> > hadoop-mapreduce-client-jobclient-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50
> > hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
> > -rw-r--r-- 1 hadoop users 21537 Nov 21 08:51
> > hadoop-mapreduce-client-shuffle-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 94728 Nov 21 08:51
> > hadoop-yarn-client-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51
> > hadoop-yarn-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 175554 Nov 21 08:52
> > hadoop-yarn-server-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 467638 Nov 21 08:52
> > hadoop-yarn-server-nodemanager-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 825853 Oct 12 06:28
> > hbase-client-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 354845 Oct 12 06:28
> > hbase-common-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 132690 Oct 12 06:28
> > hbase-common-0.96.0-hadoop2-tests.jar
> > -rw-r--r-- 1 hadoop users 97428 Oct 12 06:28
> > hbase-examples-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 72765 Oct 12 06:28
> > hbase-hadoop2-compat-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 32096 Oct 12 06:28
> > hbase-hadoop-compat-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 12174 Oct 12 06:28
> hbase-it-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 288784 Oct 12 06:28
> > hbase-it-0.96.0-hadoop2-tests.jar
> > -rw-r--r-- 1 hadoop users 94784 Oct 12 06:28
> > hbase-prefix-tree-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28
> > hbase-protocol-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28
> > hbase-server-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28
> > hbase-server-0.96.0-hadoop2-tests.jar
> > -rw-r--r-- 1 hadoop users 12554 Oct 12 06:28
> > hbase-shell-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 10941 Oct 12 06:28
> > hbase-testing-util-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28
> > hbase-thrift-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
> > -rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
> > -rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
> > -rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
> > -rw-r--r-- 1 hadoop users 227517 Sep 17 16:13
> jackson-core-asl-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 669065 Sep 17 16:13
> > jackson-mapper-asl-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
> > -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13
> jasper-compiler-5.5.23.jar
> > -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
> > -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
> > -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
> > -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
> > -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
> > -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
> > -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
> > -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
> > -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
> > -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
> > -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
> > -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29
> > jersey-test-framework-core-1.8.jar
> > -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29
> > jersey-test-framework-grizzly2-1.8.jar
> > -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
> > -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
> > -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
> > -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15
> jetty-sslengine-6.1.26.jar
> > -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
> > -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
> > -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
> > -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
> > -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
> > -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
> > -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
> > -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
> > -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
> > -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
> > -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29
> > management-api-3.0.0-b012.jar
> > -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
> > drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
> > -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
> > -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
> > -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
> > drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
> > -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13
> servlet-api-2.5-6.1.14.jar
> > -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
> > -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
> > -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
> > -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
> > -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
> > -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
> > -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
> > -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
> >
> > Best regards,
> > Henry
> >
> > -----Original Message-----
> > From: MA11 YTHung1
> > Sent: Thursday, November 21, 2013 9:51 AM
> > To: user@hbase.apache.org
> > Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> > RPC.stopProxy called on non proxy.
> >
> > I'm using hadoop-2.2.0 stable
> >
> > -----Original Message-----
> > From: Jimmy Xiang [mailto:jxiang@cloudera.com]
> > Sent: Thursday, November 21, 2013 9:49 AM
> > To: user
> > Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> > RPC.stopProxy called on non proxy.
> >
> > Which version of Hadoop do you use?
> >
> >
> > On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
> >
> > > Hi All,
> > >
> > > When stopping the master or a regionserver, I found some ERROR and
> > > WARN entries in the log files; can these errors cause problems in hbase:
> > >
> > > 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> > > 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> > > java.lang.IllegalArgumentException: object is not an instance of declaring class
> > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > at java.lang.reflect.Method.invoke(Method.java:597)
> > > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > at $Proxy18.close(Unknown Source)
> > > at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> > > at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > > at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > > at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > > at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > > at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > > at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > > 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > > org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > > at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> > > at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > > at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > > at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > > at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > > at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > > at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > >
> > > Best regards,
> > > Henry
> > >
> > >
> >
Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
called on non proxy.
Posted by Ted Yu <yu...@gmail.com>.
Henry:
Thanks for the additional information.
Looks like an HA namenode with QJM is not covered by the current code.
Mind filing a JIRA with a summary of this thread?
Cheers
On Tue, Nov 26, 2013 at 9:12 AM, Henry Hung <YT...@winbond.com> wrote:
> @Ted
> Yes, I use the hadoop-hdfs-2.2.0.jar.
>
> BTW, how are you certain that the namenode class is
> ClientNamenodeProtocolTranslatorPB?
>
> From NameNodeProxies, I can only assume that
> ClientNamenodeProtocolTranslatorPB is used only when connecting to a
> single Hadoop namenode.
>
> public static <T> ProxyAndInfo<T> createNonHAProxy(
> Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
> UserGroupInformation ugi, boolean withRetries) throws IOException {
> Text dtService = SecurityUtil.buildTokenService(nnAddr);
>
> T proxy;
> if (xface == ClientProtocol.class) {
> proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
> withRetries);
>
>
> But I'm using an HA configuration with QJM, so my guess is that
> createProxy will go to the HA case, because I provide
> failoverProxyProviderClass as
> "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider".
>
> public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
> URI nameNodeUri, Class<T> xface) throws IOException {
> Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
> getFailoverProxyProviderClass(conf, nameNodeUri, xface);
>
> if (failoverProxyProviderClass == null) {
> // Non-HA case
> return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri),
> xface,
> UserGroupInformation.getCurrentUser(), true);
> } else {
> // HA case
> FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
> .createFailoverProxyProvider(conf, failoverProxyProviderClass,
> xface,
> nameNodeUri);
> Conf config = new Conf(conf);
> T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
> RetryPolicies
> .failoverOnNetworkException(RetryPolicies.TRY_ONCE_THEN_FAIL,
> config.maxFailoverAttempts, config.failoverSleepBaseMillis,
> config.failoverSleepMaxMillis));
>
> Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
> return new ProxyAndInfo<T>(proxy, dtService);
> }
> }
>
> Here is the snippet of my hdfs-site.xml:
>
> <property>
> <name>dfs.nameservices</name>
> <value>hadoopdev</value>
> </property>
> <property>
> <name>dfs.ha.namenodes.hadoopdev</name>
> <value>nn1,nn2</value>
> </property>
> <property>
> <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
> <value>fphd9.ctpilot1.com:9000</value>
> </property>
> <property>
> <name>dfs.namenode.http-address.hadoopdev.nn1</name>
> <value>fphd9.ctpilot1.com:50070</value>
> </property>
> <property>
> <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
> <value>fphd10.ctpilot1.com:9000</value>
> </property>
> <property>
> <name>dfs.namenode.http-address.hadoopdev.nn2</name>
> <value>fphd10.ctpilot1.com:50070</value>
> </property>
> <property>
> <name>dfs.namenode.shared.edits.dir</name>
> <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;
> fphd10.ctpilot1.com:8485/hadoopdev</value>
> </property>
> <property>
> <name>dfs.client.failover.proxy.provider.hadoopdev</name>
>
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> <property>
> <name>dfs.ha.fencing.methods</name>
> <value>shell(/bin/true)</value>
> </property>
> <property>
> <name>dfs.journalnode.edits.dir</name>
> <value>/data/hadoop/hadoop-data-2/journal</value>
> </property>
> <property>
> <name>dfs.ha.automatic-failover.enabled</name>
> <value>true</value>
> </property>
> <property>
> <name>ha.zookeeper.quorum</name>
> <value>fphd1.ctpilot1.com:2222</value>
> </property>
>
> -----Original Message-----
> From: Ted Yu [mailto:yuzhihong@gmail.com]
> Sent: Tuesday, November 26, 2013 1:56 AM
> To: user@hbase.apache.org
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
> called on non proxy.
>
> Here is the caller to createReorderingProxy():
>
> ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);
>
> where namenode
> is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB :
>
> public class ClientNamenodeProtocolTranslatorPB implements
>
> ProtocolMetaInterface, ClientProtocol, Closeable, ProtocolTranslator {
>
> In createReorderingProxy() :
>
> new Class[]{ClientProtocol.class, Closeable.class},
>
> We ask for Closeable interface.
>
>
> Did the error persist after you replaced the jars with hadoop-hdfs-2.2.0.jar?
> Meaning, did you start HBase using the new hadoop jars?
>
> Cheers
>
>
> On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com> wrote:
>
> > I looked into the source code of
> > org/apache/hadoop/hbase/fs/HFileSystem.java, and whenever I execute
> > hbase-daemon.sh stop master (or regionserver), method.getName() is
> > "close", but org/apache/hadoop/hdfs/protocol/ClientProtocol.java does
> > not have a method named "close", thus it results in the error "object
> > is not an instance of declaring class".
> >
> > Could someone familiar with hbase-0.96.0 hadoop2 tell me whether
> > this problem needs to be fixed? And how to fix it?
> >
> > private static ClientProtocol createReorderingProxy(final
> > ClientProtocol cp, final ReorderBlocks lrb, final Configuration conf) {
> > return (ClientProtocol) Proxy.newProxyInstance
> > (cp.getClass().getClassLoader(),
> > new Class[]{ClientProtocol.class, Closeable.class},
> > new InvocationHandler() {
> > public Object invoke(Object proxy, Method method,
> > Object[] args) throws Throwable {
> > try {
> >                 // method.invoke will fail if method.getName().equals("close")
> >                 // because ClientProtocol does not have a method "close"
> > Object res = method.invoke(cp, args);
> > if (res != null && args != null && args.length == 3
> > && "getBlockLocations".equals(method.getName())
> > && res instanceof LocatedBlocks
> > && args[0] instanceof String
> > && args[0] != null) {
> > lrb.reorderBlocks(conf, (LocatedBlocks) res,
> > (String) args[0]);
> > }
> > return res;
> > } catch (InvocationTargetException ite) {
> > // We will have this for all the exception, checked
> > on not, sent
> > // by any layer, including the functional exception
> > Throwable cause = ite.getCause();
> > if (cause == null){
> > throw new RuntimeException(
> >                         "Proxy invocation failed and getCause is null", ite);
> > }
> > if (cause instanceof UndeclaredThrowableException) {
> > Throwable causeCause = cause.getCause();
> > if (causeCause == null) {
> >                     throw new RuntimeException("UndeclaredThrowableException had null cause!");
> > }
> > cause = cause.getCause();
> > }
> > throw cause;
> > }
> > }
> > });
> > }
> >
> >
> >
> > -----Original Message-----
> > From: MA11 YTHung1
> > Sent: Thursday, November 21, 2013 9:57 AM
> > To: user@hbase.apache.org
> > Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> > RPC.stopProxy called on non proxy.
> >
> > Additional information:
> >
> > I replaced all files with the prefix "hadoop" in hbase-0.96.0-hadoop2/lib
> > with the hadoop-2.2.0 libraries.
> >
> > The ls -l of hbase-0.96.0-hadoop2/lib is as below:
> >
> > -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
> > -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
> > -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
> > -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
> > -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12 commons-beanutils-1.7.0.jar
> > -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13 commons-beanutils-core-1.8.0.jar
> > -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
> > -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
> > -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12 commons-collections-3.2.1.jar
> > -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
> > -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13 commons-configuration-1.6.jar
> > -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
> > -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
> > -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
> > -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12 commons-httpclient-3.1.jar
> > -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
> > -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
> > -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
> > -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
> > -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
> > -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
> > -rw-r--r-- 1 hadoop users 15322 Sep 17 16:12 findbugs-annotations-1.3.9-1.jar
> > -rw-r--r-- 1 hadoop users 21817 Sep 17 23:29 gmbal-api-only-3.0.0-b023.jar
> > -rw-r--r-- 1 hadoop users 684337 Sep 17 23:29 grizzly-framework-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 210846 Sep 17 23:29 grizzly-framework-2.1.1-tests.jar
> > -rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 193583 Sep 17 23:29 grizzly-http-server-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 336878 Sep 17 23:29 grizzly-http-servlet-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
> > -rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
> > -rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
> > -rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
> > -rw-r--r-- 1 hadoop users 16778 Nov 20 17:39 hadoop-annotations-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 2576 Oct 12 06:20 hadoop-client-2.1.0-beta.jar
> > -rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50 hadoop-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48 hadoop-hdfs-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48 hadoop-hdfs-2.2.0-tests.jar
> > -rw-r--r-- 1 hadoop users 482042 Nov 21 08:49 hadoop-mapreduce-client-app-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 656365 Nov 21 08:49 hadoop-mapreduce-client-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50 hadoop-mapreduce-client-core-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 35216 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
> > -rw-r--r-- 1 hadoop users 21537 Nov 21 08:51 hadoop-mapreduce-client-shuffle-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 94728 Nov 21 08:51 hadoop-yarn-client-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51 hadoop-yarn-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 175554 Nov 21 08:52 hadoop-yarn-server-common-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 467638 Nov 21 08:52 hadoop-yarn-server-nodemanager-2.2.0.jar
> > -rw-r--r-- 1 hadoop users 825853 Oct 12 06:28 hbase-client-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 354845 Oct 12 06:28 hbase-common-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 132690 Oct 12 06:28 hbase-common-0.96.0-hadoop2-tests.jar
> > -rw-r--r-- 1 hadoop users 97428 Oct 12 06:28 hbase-examples-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 72765 Oct 12 06:28 hbase-hadoop2-compat-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 32096 Oct 12 06:28 hbase-hadoop-compat-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 12174 Oct 12 06:28 hbase-it-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 288784 Oct 12 06:28 hbase-it-0.96.0-hadoop2-tests.jar
> > -rw-r--r-- 1 hadoop users 94784 Oct 12 06:28 hbase-prefix-tree-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28 hbase-protocol-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28 hbase-server-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28 hbase-server-0.96.0-hadoop2-tests.jar
> > -rw-r--r-- 1 hadoop users 12554 Oct 12 06:28 hbase-shell-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 10941 Oct 12 06:28 hbase-testing-util-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28 hbase-thrift-0.96.0-hadoop2.jar
> > -rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
> > -rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
> > -rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
> > -rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
> > -rw-r--r-- 1 hadoop users 227517 Sep 17 16:13 jackson-core-asl-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 669065 Sep 17 16:13 jackson-mapper-asl-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
> > -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
> > -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13 jasper-compiler-5.5.23.jar
> > -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
> > -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
> > -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
> > -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
> > -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
> > -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
> > -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
> > -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
> > -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
> > -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
> > -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
> > -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29 jersey-test-framework-core-1.8.jar
> > -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29 jersey-test-framework-grizzly2-1.8.jar
> > -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
> > -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
> > -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
> > -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15 jetty-sslengine-6.1.26.jar
> > -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
> > -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
> > -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
> > -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
> > -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
> > -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
> > -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
> > -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
> > -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
> > -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
> > -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29 management-api-3.0.0-b012.jar
> > -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
> > drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
> > -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
> > -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
> > -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
> > drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
> > -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13 servlet-api-2.5-6.1.14.jar
> > -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
> > -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
> > -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
> > -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
> > -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
> > -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
> > -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
> > -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
> >
> > Best regards,
> > Henry
> >
> > -----Original Message-----
> > From: MA11 YTHung1
> > Sent: Thursday, November 21, 2013 9:51 AM
> > To: user@hbase.apache.org
> > Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> > RPC.stopProxy called on non proxy.
> >
> > I'm using hadoop-2.2.0 stable
> >
> > -----Original Message-----
> > From: Jimmy Xiang [mailto:jxiang@cloudera.com]
> > Sent: Thursday, November 21, 2013 9:49 AM
> > To: user
> > Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> > RPC.stopProxy called on non proxy.
> >
> > Which version of Hadoop do you use?
> >
> >
> > On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
> >
> > > Hi All,
> > >
> > > When stopping the master or a regionserver, I found some ERROR and
> > > WARN entries in the log files. Can these errors cause problems in
> > > hbase?
> > >
> > > 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> > > 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> > > java.lang.IllegalArgumentException: object is not an instance of declaring class
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >         at java.lang.reflect.Method.invoke(Method.java:597)
> > >         at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > >         at $Proxy18.close(Unknown Source)
> > >         at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> > >         at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > >         at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > >         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > >         at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > > 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > > org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > >         at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> > >         at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > >         at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > >         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > >         at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > >
> > > Best regards,
> > > Henry
> > >
> > > ________________________________
> > > The privileged confidential information contained in this email is
> > > intended for use only by the addressees as indicated by the original
> > > sender of this email. If you are not the addressee indicated in this
> > > email or are not responsible for delivery of the email to such a
> > > person, please kindly reply to the sender indicating this fact and
> > > delete all copies of it from your computer and network server
> > > immediately. Your cooperation is highly appreciated. It is advised
> > > that any unauthorized use of confidential information of Winbond is
> > > strictly prohibited; and any information in this email irrelevant to
> > > the official business of Winbond shall be deemed as neither given
> > > nor
> > endorsed by Winbond.
> > >
RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
called on non proxy.
Posted by Henry Hung <YT...@winbond.com>.
@Ted
Yes, I use hadoop-hdfs-2.2.0.jar.
BTW, how can you be certain that the namenode class is ClientNamenodeProtocolTranslatorPB?
From NameNodeProxies, I can only tell that ClientNamenodeProtocolTranslatorPB is used when connecting to a single Hadoop namenode:
public static <T> ProxyAndInfo<T> createNonHAProxy(
    Configuration conf, InetSocketAddress nnAddr, Class<T> xface,
    UserGroupInformation ugi, boolean withRetries) throws IOException {
  Text dtService = SecurityUtil.buildTokenService(nnAddr);

  T proxy;
  if (xface == ClientProtocol.class) {
    proxy = (T) createNNProxyWithClientProtocol(nnAddr, conf, ugi,
        withRetries);
But I'm using an HA configuration with QJM, so my guess is that createProxy will take the HA branch, because I provide the failover proxy provider class "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider":
public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
    URI nameNodeUri, Class<T> xface) throws IOException {
  Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
      getFailoverProxyProviderClass(conf, nameNodeUri, xface);

  if (failoverProxyProviderClass == null) {
    // Non-HA case
    return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri), xface,
        UserGroupInformation.getCurrentUser(), true);
  } else {
    // HA case
    FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
        .createFailoverProxyProvider(conf, failoverProxyProviderClass, xface,
            nameNodeUri);
    Conf config = new Conf(conf);
    T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
        RetryPolicies.failoverOnNetworkException(
            RetryPolicies.TRY_ONCE_THEN_FAIL, config.maxFailoverAttempts,
            config.failoverSleepBaseMillis, config.failoverSleepMaxMillis));
    Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
    return new ProxyAndInfo<T>(proxy, dtService);
  }
}
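Since ConfiguredFailoverProxyProvider is configured, the HA branch runs, and the ClientProtocol handed back to the client is the dynamic proxy built by RetryProxy.create, with the Closeable translator hidden behind it. Here is a minimal JDK-only sketch of that shape; Protocol, Translator, and the hand-rolled wrapper below are hypothetical stand-ins, not the actual Hadoop classes:

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RetryWrapDemo {
    // Hypothetical stand-in for ClientProtocol.
    interface Protocol {
        String ping();
    }

    // Stand-in for ClientNamenodeProtocolTranslatorPB: implements the
    // protocol AND Closeable, so it could be closed directly.
    static class Translator implements Protocol, Closeable {
        public String ping() { return "pong"; }
        public void close() { System.out.println("translator closed"); }
    }

    public static void main(String[] args) {
        final Translator inner = new Translator();
        // Stand-in for RetryProxy.create(xface, ...): the wrapper proxy
        // only advertises the protocol interface, not Closeable.
        Protocol retried = (Protocol) Proxy.newProxyInstance(
                Protocol.class.getClassLoader(),
                new Class<?>[]{Protocol.class},
                new InvocationHandler() {
                    public Object invoke(Object p, Method m, Object[] a)
                            throws Throwable {
                        return m.invoke(inner, a); // real code adds retry logic
                    }
                });

        // The two checks a stopProxy-style shutdown can rely on both fail:
        System.out.println("proxy is Closeable: "
                + (retried instanceof Closeable));
        System.out.println("handler is Closeable: "
                + (Proxy.getInvocationHandler(retried) instanceof Closeable));
    }
}
```

Whether the real retry invocation handler is itself Closeable in a given Hadoop release is exactly the kind of detail that decides whether shutdown succeeds here.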
Here is the snippet of my hdfs-site.xml:
<property>
  <name>dfs.nameservices</name>
  <value>hadoopdev</value>
</property>
<property>
  <name>dfs.ha.namenodes.hadoopdev</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoopdev.nn1</name>
  <value>fphd9.ctpilot1.com:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.hadoopdev.nn1</name>
  <value>fphd9.ctpilot1.com:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoopdev.nn2</name>
  <value>fphd10.ctpilot1.com:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.hadoopdev.nn2</name>
  <value>fphd10.ctpilot1.com:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://fphd8.ctpilot1.com:8485;fphd9.ctpilot1.com:8485;fphd10.ctpilot1.com:8485/hadoopdev</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hadoopdev</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoop/hadoop-data-2/journal</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>fphd1.ctpilot1.com:2222</value>
</property>
-----Original Message-----
From: Ted Yu [mailto:yuzhihong@gmail.com]
Sent: Tuesday, November 26, 2013 1:56 AM
To: user@hbase.apache.org
Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Here is the caller to createReorderingProxy():

    ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);

where namenode is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB:

    public class ClientNamenodeProtocolTranslatorPB implements
        ProtocolMetaInterface, ClientProtocol, Closeable, ProtocolTranslator {

In createReorderingProxy():

    new Class[]{ClientProtocol.class, Closeable.class},

we ask for the Closeable interface.
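For context, the check that produces "Cannot close proxy - is not Closeable or does not provide closeable invocation handler" can be sketched roughly as follows; this is a simplification reconstructed from the error text, not the verbatim RPC.stopProxy from Hadoop 2.2:

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class StopProxySketch {
    // Rough sketch of the shutdown-side check: close the proxy itself if it
    // is Closeable, otherwise close its invocation handler, otherwise fail.
    static void stopProxy(Object proxy) throws IOException {
        if (proxy instanceof Closeable) {
            ((Closeable) proxy).close();
            return;
        }
        if (Proxy.isProxyClass(proxy.getClass())) {
            InvocationHandler handler = Proxy.getInvocationHandler(proxy);
            if (handler instanceof Closeable) {
                ((Closeable) handler).close();
                return;
            }
        }
        throw new IllegalArgumentException(
                "Cannot close proxy - is not Closeable or does not provide "
                + "closeable invocation handler");
    }

    public static void main(String[] args) throws IOException {
        try {
            stopProxy(new Object()); // neither Closeable nor a dynamic proxy
        } catch (IllegalArgumentException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```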
Did the error persist after you replaced the jar with hadoop-hdfs-2.2.0.jar?
Meaning, did you restart HBase using the new Hadoop jars?
Cheers
On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com> wrote:
> I looked into the source code of
> org/apache/hadoop/hbase/fs/HFileSystem.java, and whenever I execute
> hbase-daemon.sh stop master (or regionserver), method.getName() is
> "close". But org/apache/hadoop/hdfs/protocol/ClientProtocol.java does
> not have a method named "close", so the call results in the error
> "object is not an instance of declaring class".
>
> Could someone familiar with hbase-0.96.0 hadoop2 tell me whether this
> problem needs to be fixed? And how to fix it?
>
> private static ClientProtocol createReorderingProxy(
>     final ClientProtocol cp, final ReorderBlocks lrb, final Configuration conf) {
>   return (ClientProtocol) Proxy.newProxyInstance(
>       cp.getClass().getClassLoader(),
>       new Class[]{ClientProtocol.class, Closeable.class},
>       new InvocationHandler() {
>         public Object invoke(Object proxy, Method method,
>             Object[] args) throws Throwable {
>           try {
>             // method.invoke will fail if method.getName().equals("close"),
>             // because ClientProtocol does not have a method "close"
>             Object res = method.invoke(cp, args);
>             if (res != null && args != null && args.length == 3
>                 && "getBlockLocations".equals(method.getName())
>                 && res instanceof LocatedBlocks
>                 && args[0] instanceof String
>                 && args[0] != null) {
>               lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
>             }
>             return res;
>           } catch (InvocationTargetException ite) {
>             // We will get this for all exceptions, checked or not, thrown
>             // by any layer, including the functional exception
>             Throwable cause = ite.getCause();
>             if (cause == null) {
>               throw new RuntimeException(
>                   "Proxy invocation failed and getCause is null", ite);
>             }
>             if (cause instanceof UndeclaredThrowableException) {
>               Throwable causeCause = cause.getCause();
>               if (causeCause == null) {
>                 throw new RuntimeException(
>                     "UndeclaredThrowableException had null cause!");
>               }
>               cause = cause.getCause();
>             }
>             throw cause;
>           }
>         }
>       });
> }
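The failure described above can be reproduced with nothing but JDK dynamic proxies; FakeClientProtocol and Target below are hypothetical stand-ins, not the Hadoop interfaces. The proxy advertises Closeable, but the handler forwards close() to a target whose class never declared it, and Method.invoke rejects that:

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyCloseDemo {
    // Hypothetical stand-in for ClientProtocol: no close() method declared.
    interface FakeClientProtocol {
        String getBlockLocations(String src);
    }

    static class Target implements FakeClientProtocol {
        public String getBlockLocations(String src) { return "blocks:" + src; }
    }

    public static void main(String[] args) {
        final FakeClientProtocol cp = new Target();
        // Like createReorderingProxy: advertise Closeable alongside the
        // protocol, but blindly forward every method to cp.
        Object proxy = Proxy.newProxyInstance(
                FakeClientProtocol.class.getClassLoader(),
                new Class<?>[]{FakeClientProtocol.class, Closeable.class},
                new InvocationHandler() {
                    public Object invoke(Object p, Method method, Object[] a)
                            throws Throwable {
                        // For close(), the declaring class is Closeable,
                        // which Target does not implement, so invoke() throws.
                        return method.invoke(cp, a);
                    }
                });
        try {
            ((Closeable) proxy).close();
            System.out.println("close succeeded");
        } catch (Exception e) {
            System.out.println(e.getClass().getSimpleName() + ": "
                    + e.getMessage());
        }
    }
}
```

Running this prints the same "object is not an instance of declaring class" message seen in the shutdown log.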
>
>
>
> -----Original Message-----
> From: MA11 YTHung1
> Sent: Thursday, November 21, 2013 9:57 AM
> To: user@hbase.apache.org
> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> RPC.stopProxy called on non proxy.
>
> Additional information:
>
> I replace all files with prefix hadoop in hbase-0.96.0-hadoop2/lib
> with
> hadoop-2.2.0 libraries.
>
> the ls -l of hbase-0.96.0-hadoop2/lib as below:
>
> -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
> -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
> -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
> -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
> -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12 commons-beanutils-1.7.0.jar
> -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13
> commons-beanutils-core-1.8.0.jar
> -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
> -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
> -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12
> commons-collections-3.2.1.jar
> -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
> -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13
> commons-configuration-1.6.jar
> -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
> -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
> -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
> -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12 commons-httpclient-3.1.jar
> -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
> -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
> -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
> -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
> -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
> -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
> -rw-r--r-- 1 hadoop users 15322 Sep 17 16:12
> findbugs-annotations-1.3.9-1.jar
> -rw-r--r-- 1 hadoop users 21817 Sep 17 23:29
> gmbal-api-only-3.0.0-b023.jar
> -rw-r--r-- 1 hadoop users 684337 Sep 17 23:29 grizzly-framework-2.1.1.jar
> -rw-r--r-- 1 hadoop users 210846 Sep 17 23:29
> grizzly-framework-2.1.1-tests.jar
> -rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
> -rw-r--r-- 1 hadoop users 193583 Sep 17 23:29
> grizzly-http-server-2.1.1.jar
> -rw-r--r-- 1 hadoop users 336878 Sep 17 23:29
> grizzly-http-servlet-2.1.1.jar
> -rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
> -rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
> -rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
> -rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
> -rw-r--r-- 1 hadoop users 16778 Nov 20 17:39
> hadoop-annotations-2.2.0.jar
> -rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
> -rw-r--r-- 1 hadoop users 2576 Oct 12 06:20
> hadoop-client-2.1.0-beta.jar
> -rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50
> hadoop-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48 hadoop-hdfs-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48 hadoop-hdfs-2.2.0-tests.jar
> -rw-r--r-- 1 hadoop users 482042 Nov 21 08:49
> hadoop-mapreduce-client-app-2.2.0.jar
> -rw-r--r-- 1 hadoop users 656365 Nov 21 08:49
> hadoop-mapreduce-client-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50
> hadoop-mapreduce-client-core-2.2.0.jar
> -rw-r--r-- 1 hadoop users 35216 Nov 21 08:50
> hadoop-mapreduce-client-jobclient-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50
> hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
> -rw-r--r-- 1 hadoop users 21537 Nov 21 08:51
> hadoop-mapreduce-client-shuffle-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
> -rw-r--r-- 1 hadoop users 94728 Nov 21 08:51
> hadoop-yarn-client-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51
> hadoop-yarn-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 175554 Nov 21 08:52
> hadoop-yarn-server-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 467638 Nov 21 08:52
> hadoop-yarn-server-nodemanager-2.2.0.jar
> -rw-r--r-- 1 hadoop users 825853 Oct 12 06:28
> hbase-client-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 354845 Oct 12 06:28
> hbase-common-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 132690 Oct 12 06:28
> hbase-common-0.96.0-hadoop2-tests.jar
> -rw-r--r-- 1 hadoop users 97428 Oct 12 06:28
> hbase-examples-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 72765 Oct 12 06:28
> hbase-hadoop2-compat-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 32096 Oct 12 06:28
> hbase-hadoop-compat-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 12174 Oct 12 06:28 hbase-it-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 288784 Oct 12 06:28
> hbase-it-0.96.0-hadoop2-tests.jar
> -rw-r--r-- 1 hadoop users 94784 Oct 12 06:28
> hbase-prefix-tree-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28
> hbase-protocol-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28
> hbase-server-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28
> hbase-server-0.96.0-hadoop2-tests.jar
> -rw-r--r-- 1 hadoop users 12554 Oct 12 06:28
> hbase-shell-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 10941 Oct 12 06:28
> hbase-testing-util-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28
> hbase-thrift-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
> -rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
> -rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
> -rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
> -rw-r--r-- 1 hadoop users 227517 Sep 17 16:13 jackson-core-asl-1.8.8.jar
> -rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
> -rw-r--r-- 1 hadoop users 669065 Sep 17 16:13
> jackson-mapper-asl-1.8.8.jar
> -rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
> -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
> -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13 jasper-compiler-5.5.23.jar
> -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
> -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
> -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
> -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
> -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
> -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
> -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
> -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
> -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
> -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
> -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
> -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29
> jersey-test-framework-core-1.8.jar
> -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29
> jersey-test-framework-grizzly2-1.8.jar
> -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
> -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
> -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
> -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15 jetty-sslengine-6.1.26.jar
> -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
> -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
> -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
> -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
> -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
> -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
> -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
> -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
> -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
> -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
> -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29
> management-api-3.0.0-b012.jar
> -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
> drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
> -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
> -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
> -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
> drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
> -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13 servlet-api-2.5-6.1.14.jar
> -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
> -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
> -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
> -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
> -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
> -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
> -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
> -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
>
> Best regards,
> Henry
>
> -----Original Message-----
> From: MA11 YTHung1
> Sent: Thursday, November 21, 2013 9:51 AM
> To: user@hbase.apache.org
> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC:
> RPC.stopProxy called on non proxy.
>
> I'm using hadoop-2.2.0 stable
>
> -----Original Message-----
> From: Jimmy Xiang [mailto:jxiang@cloudera.com]
> Sent: Thursday, November 21, 2013 9:49 AM
> To: user
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC:
> RPC.stopProxy called on non proxy.
>
> Which version of Hadoop do you use?
>
>
> On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
>
> > Hi All,
> >
> > When stopping master or regionserver, I found some ERROR and WARN in
> > the log files, are these errors can cause problem in hbase:
> >
> > 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> > 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> > java.lang.IllegalArgumentException: object is not an instance of declaring class
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > at java.lang.reflect.Method.invoke(Method.java:597)
> > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > at $Proxy18.close(Unknown Source)
> > at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> > at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> > at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> >
> > Best regards,
> > Henry
> >
> > ________________________________
> > The privileged confidential information contained in this email is
> > intended for use only by the addressees as indicated by the original
> > sender of this email. If you are not the addressee indicated in this
> > email or are not responsible for delivery of the email to such a
> > person, please kindly reply to the sender indicating this fact and
> > delete all copies of it from your computer and network server
> > immediately. Your cooperation is highly appreciated. It is advised
> > that any unauthorized use of confidential information of Winbond is
> > strictly prohibited; and any information in this email irrelevant to
> > the official business of Winbond shall be deemed as neither given
> > nor
> endorsed by Winbond.
> >
>
Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
called on non proxy.
Posted by Ted Yu <yu...@gmail.com>.
Here is the caller to createReorderingProxy():

ClientProtocol cp1 = createReorderingProxy(namenode, lrb, conf);

where namenode is org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB:

public class ClientNamenodeProtocolTranslatorPB implements
    ProtocolMetaInterface, ClientProtocol, Closeable, ProtocolTranslator {

In createReorderingProxy():

new Class[]{ClientProtocol.class, Closeable.class},

we ask for the Closeable interface.
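The failure can be reproduced in isolation. In the sketch below, Protocol and Backend are hypothetical stand-ins for ClientProtocol and its implementation (this is not HBase code): a proxy advertises Closeable, but the naive handler blindly forwards close() to a target that is not Closeable, producing exactly the IllegalArgumentException from the log; intercepting close() in the handler avoids it.

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyCloseDemo {
    // Stand-in for ClientProtocol: note there is no close() method here.
    interface Protocol {
        String ping();
    }

    static class Backend implements Protocol {
        public String ping() { return "pong"; }
    }

    public static void main(String[] args) throws Exception {
        final Backend backend = new Backend();
        Class<?>[] ifaces = {Protocol.class, Closeable.class};

        // Naive handler: forwards every call, including close(), to a target
        // that does not implement Closeable -- mirroring the failing code path.
        Protocol naive = (Protocol) Proxy.newProxyInstance(
            Protocol.class.getClassLoader(), ifaces,
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] a) throws Throwable {
                    return m.invoke(backend, a);
                }
            });
        try {
            ((Closeable) naive).close();
        } catch (IllegalArgumentException e) {
            // Method.invoke rejects a receiver that is not an instance of the
            // method's declaring class (Closeable in this case).
            System.out.println("naive close failed: " + e.getClass().getSimpleName());
        }

        // Fixed handler: intercept close() instead of forwarding it.
        Protocol fixed = (Protocol) Proxy.newProxyInstance(
            Protocol.class.getClassLoader(), ifaces,
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] a) throws Throwable {
                    if ("close".equals(m.getName())) {
                        return null; // release target resources here if needed
                    }
                    return m.invoke(backend, a);
                }
            });
        ((Closeable) fixed).close(); // no exception now
        System.out.println("fixed close ok, ping=" + fixed.ping());
    }
}
```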
Did the error persist after you replaced it with hadoop-hdfs-2.2.0.jar?
That is, did you start HBase using the new hadoop jars?
Cheers
On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung <YT...@winbond.com> wrote:
> I looked into the source code of org/apache/hadoop/hbase/fs/HFileSystem.java:
> whenever I execute hbase-daemon.sh stop master (or regionserver), method.getName() is "close",
> but org/apache/hadoop/hdfs/protocol/ClientProtocol.java does not have a method named "close",
> so it results in the error "object is not an instance of declaring class".
>
> Could someone familiar with hbase-0.96.0-hadoop2 tell me whether this problem
> needs to be fixed, and how?
>
> private static ClientProtocol createReorderingProxy(final ClientProtocol cp,
>     final ReorderBlocks lrb, final Configuration conf) {
>   return (ClientProtocol) Proxy.newProxyInstance(
>       cp.getClass().getClassLoader(),
>       new Class[]{ClientProtocol.class, Closeable.class},
>       new InvocationHandler() {
>         public Object invoke(Object proxy, Method method,
>                              Object[] args) throws Throwable {
>           try {
>             // method.invoke will fail if method.getName().equals("close"),
>             // because ClientProtocol has no method "close"
>             Object res = method.invoke(cp, args);
>             if (res != null && args != null && args.length == 3
>                 && "getBlockLocations".equals(method.getName())
>                 && res instanceof LocatedBlocks
>                 && args[0] instanceof String
>                 && args[0] != null) {
>               lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
>             }
>             return res;
>           } catch (InvocationTargetException ite) {
>             // We will get this for every exception, checked or not, thrown
>             // by any layer, including functional exceptions
>             Throwable cause = ite.getCause();
>             if (cause == null) {
>               throw new RuntimeException(
>                   "Proxy invocation failed and getCause is null", ite);
>             }
>             if (cause instanceof UndeclaredThrowableException) {
>               Throwable causeCause = cause.getCause();
>               if (causeCause == null) {
>                 throw new RuntimeException("UndeclaredThrowableException had null cause!");
>               }
>               cause = cause.getCause();
>             }
>             throw cause;
>           }
>         }
>       });
> }
>
>
>
> -----Original Message-----
> From: MA11 YTHung1
> Sent: Thursday, November 21, 2013 9:57 AM
> To: user@hbase.apache.org
> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
> called on non proxy.
>
> Additional information:
>
> I replaced all files prefixed with "hadoop" in hbase-0.96.0-hadoop2/lib with
> the hadoop-2.2.0 libraries.
>
> The ls -l of hbase-0.96.0-hadoop2/lib is as below:
>
> -rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
> -rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
> -rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
> -rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
> -rw-r--r-- 1 hadoop users 188671 Sep 17 16:12 commons-beanutils-1.7.0.jar
> -rw-r--r-- 1 hadoop users 206035 Sep 17 16:13 commons-beanutils-core-1.8.0.jar
> -rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
> -rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
> -rw-r--r-- 1 hadoop users 575389 Sep 17 16:12 commons-collections-3.2.1.jar
> -rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
> -rw-r--r-- 1 hadoop users 298829 Sep 17 16:13 commons-configuration-1.6.jar
> -rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
> -rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
> -rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
> -rw-r--r-- 1 hadoop users 305001 Sep 17 16:12 commons-httpclient-3.1.jar
> -rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
> -rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
> -rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
> -rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
> -rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
> -rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
> -rw-r--r-- 1 hadoop users 15322 Sep 17 16:12 findbugs-annotations-1.3.9-1.jar
> -rw-r--r-- 1 hadoop users 21817 Sep 17 23:29 gmbal-api-only-3.0.0-b023.jar
> -rw-r--r-- 1 hadoop users 684337 Sep 17 23:29 grizzly-framework-2.1.1.jar
> -rw-r--r-- 1 hadoop users 210846 Sep 17 23:29 grizzly-framework-2.1.1-tests.jar
> -rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
> -rw-r--r-- 1 hadoop users 193583 Sep 17 23:29 grizzly-http-server-2.1.1.jar
> -rw-r--r-- 1 hadoop users 336878 Sep 17 23:29 grizzly-http-servlet-2.1.1.jar
> -rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
> -rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
> -rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
> -rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
> -rw-r--r-- 1 hadoop users 16778 Nov 20 17:39 hadoop-annotations-2.2.0.jar
> -rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
> -rw-r--r-- 1 hadoop users 2576 Oct 12 06:20 hadoop-client-2.1.0-beta.jar
> -rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50 hadoop-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48 hadoop-hdfs-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48 hadoop-hdfs-2.2.0-tests.jar
> -rw-r--r-- 1 hadoop users 482042 Nov 21 08:49 hadoop-mapreduce-client-app-2.2.0.jar
> -rw-r--r-- 1 hadoop users 656365 Nov 21 08:49 hadoop-mapreduce-client-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50 hadoop-mapreduce-client-core-2.2.0.jar
> -rw-r--r-- 1 hadoop users 35216 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
> -rw-r--r-- 1 hadoop users 21537 Nov 21 08:51 hadoop-mapreduce-client-shuffle-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
> -rw-r--r-- 1 hadoop users 94728 Nov 21 08:51 hadoop-yarn-client-2.2.0.jar
> -rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51 hadoop-yarn-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 175554 Nov 21 08:52 hadoop-yarn-server-common-2.2.0.jar
> -rw-r--r-- 1 hadoop users 467638 Nov 21 08:52 hadoop-yarn-server-nodemanager-2.2.0.jar
> -rw-r--r-- 1 hadoop users 825853 Oct 12 06:28 hbase-client-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 354845 Oct 12 06:28 hbase-common-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 132690 Oct 12 06:28 hbase-common-0.96.0-hadoop2-tests.jar
> -rw-r--r-- 1 hadoop users 97428 Oct 12 06:28 hbase-examples-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 72765 Oct 12 06:28 hbase-hadoop2-compat-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 32096 Oct 12 06:28 hbase-hadoop-compat-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 12174 Oct 12 06:28 hbase-it-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 288784 Oct 12 06:28 hbase-it-0.96.0-hadoop2-tests.jar
> -rw-r--r-- 1 hadoop users 94784 Oct 12 06:28 hbase-prefix-tree-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28 hbase-protocol-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28 hbase-server-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28 hbase-server-0.96.0-hadoop2-tests.jar
> -rw-r--r-- 1 hadoop users 12554 Oct 12 06:28 hbase-shell-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 10941 Oct 12 06:28 hbase-testing-util-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28 hbase-thrift-0.96.0-hadoop2.jar
> -rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
> -rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
> -rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
> -rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
> -rw-r--r-- 1 hadoop users 227517 Sep 17 16:13 jackson-core-asl-1.8.8.jar
> -rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
> -rw-r--r-- 1 hadoop users 669065 Sep 17 16:13 jackson-mapper-asl-1.8.8.jar
> -rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
> -rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
> -rw-r--r-- 1 hadoop users 408133 Sep 17 16:13 jasper-compiler-5.5.23.jar
> -rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
> -rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
> -rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
> -rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
> -rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
> -rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
> -rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
> -rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
> -rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
> -rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
> -rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
> -rw-r--r-- 1 hadoop users 28034 Sep 17 23:29 jersey-test-framework-core-1.8.jar
> -rw-r--r-- 1 hadoop users 12907 Sep 17 23:29 jersey-test-framework-grizzly2-1.8.jar
> -rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
> -rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
> -rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
> -rw-r--r-- 1 hadoop users 18891 Sep 17 16:15 jetty-sslengine-6.1.26.jar
> -rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
> -rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
> -rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
> -rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
> -rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
> -rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
> -rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
> -rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
> -rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
> -rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
> -rw-r--r-- 1 hadoop users 42212 Sep 17 23:29 management-api-3.0.0-b012.jar
> -rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
> drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
> -rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
> -rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
> -rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
> drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
> -rw-r--r-- 1 hadoop users 132368 Sep 17 16:13 servlet-api-2.5-6.1.14.jar
> -rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
> -rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
> -rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
> -rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
> -rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
> -rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
> -rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
> -rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
>
> Best regards,
> Henry
>
> -----Original Message-----
> From: MA11 YTHung1
> Sent: Thursday, November 21, 2013 9:51 AM
> To: user@hbase.apache.org
> Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
> called on non proxy.
>
> I'm using hadoop-2.2.0 stable
>
> -----Original Message-----
> From: Jimmy Xiang [mailto:jxiang@cloudera.com]
> Sent: Thursday, November 21, 2013 9:49 AM
> To: user
> Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
> called on non proxy.
>
> Which version of Hadoop do you use?
>
>
> On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
>
> > Hi All,
> >
> > When stopping the master or a regionserver, I found some ERROR and WARN entries
> > in the log files. Can these errors cause problems in HBase?
> >
> > 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> > 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> > java.lang.IllegalArgumentException: object is not an instance of declaring class
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > at java.lang.reflect.Method.invoke(Method.java:597)
> > at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > at $Proxy18.close(Unknown Source)
> > at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> > at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> > 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> > at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> > at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> > at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> > at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> > at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> > at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> > at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> >
> > Best regards,
> > Henry
> >
RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy
called on non proxy.
Posted by Henry Hung <YT...@winbond.com>.
I looked into the source code of org/apache/hadoop/hbase/fs/HFileSystem.java:
whenever I execute hbase-daemon.sh stop master (or regionserver), method.getName() is "close",
but org/apache/hadoop/hdfs/protocol/ClientProtocol.java does not have a method named "close",
so it results in the error "object is not an instance of declaring class".
Could someone familiar with hbase-0.96.0-hadoop2 tell me whether this problem needs to be fixed, and how?
private static ClientProtocol createReorderingProxy(final ClientProtocol cp,
    final ReorderBlocks lrb, final Configuration conf) {
  return (ClientProtocol) Proxy.newProxyInstance(
      cp.getClass().getClassLoader(),
      new Class[]{ClientProtocol.class, Closeable.class},
      new InvocationHandler() {
        public Object invoke(Object proxy, Method method,
                             Object[] args) throws Throwable {
          try {
            // method.invoke will fail if method.getName().equals("close"),
            // because ClientProtocol has no method "close"
            Object res = method.invoke(cp, args);
            if (res != null && args != null && args.length == 3
                && "getBlockLocations".equals(method.getName())
                && res instanceof LocatedBlocks
                && args[0] instanceof String
                && args[0] != null) {
              lrb.reorderBlocks(conf, (LocatedBlocks) res, (String) args[0]);
            }
            return res;
          } catch (InvocationTargetException ite) {
            // We will get this for every exception, checked or not, thrown
            // by any layer, including functional exceptions
            Throwable cause = ite.getCause();
            if (cause == null) {
              throw new RuntimeException(
                  "Proxy invocation failed and getCause is null", ite);
            }
            if (cause instanceof UndeclaredThrowableException) {
              Throwable causeCause = cause.getCause();
              if (causeCause == null) {
                throw new RuntimeException("UndeclaredThrowableException had null cause!");
              }
              cause = cause.getCause();
            }
            throw cause;
          }
        }
      });
}
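The WARN from RPC.stopProxy ("is not Closeable or does not provide closeable invocation handler") points at a second requirement: when a caller unwraps the proxy, the InvocationHandler itself should be closeable. The sketch below is an illustration only, not the actual HBase patch; Protocol is a hypothetical stand-in for ClientProtocol, and the unwrap-and-close step merely emulates what such a caller does.

```java
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class CloseableHandlerDemo {
    // Stand-in for ClientProtocol: no close() method of its own.
    interface Protocol {
        String ping();
    }

    // Handler that is itself Closeable, so a caller that unwraps the proxy
    // (the way RPC.stopProxy inspects the handler) finds something to close.
    static class ForwardingHandler implements InvocationHandler, Closeable {
        private final Object target;
        private boolean closed;

        ForwardingHandler(Object target) { this.target = target; }

        public Object invoke(Object proxy, Method m, Object[] a) throws Throwable {
            // Intercept close() rather than forwarding it to a target
            // that does not implement Closeable.
            if ("close".equals(m.getName()) && m.getDeclaringClass() == Closeable.class) {
                close();
                return null;
            }
            return m.invoke(target, a);
        }

        public void close() throws IOException { closed = true; }

        boolean isClosed() { return closed; }
    }

    public static void main(String[] args) throws Exception {
        ForwardingHandler handler = new ForwardingHandler(new Protocol() {
            public String ping() { return "pong"; }
        });
        Protocol proxy = (Protocol) Proxy.newProxyInstance(
            Protocol.class.getClassLoader(),
            new Class<?>[]{Protocol.class, Closeable.class},
            handler);

        System.out.println(proxy.ping());

        // Emulate a caller that unwraps the proxy and closes the handler.
        InvocationHandler h = Proxy.getInvocationHandler(proxy);
        if (h instanceof Closeable) {
            ((Closeable) h).close();
        }
        System.out.println("handler closed: " + handler.isClosed());
    }
}
```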
-----Original Message-----
From: MA11 YTHung1
Sent: Thursday, November 21, 2013 9:57 AM
To: user@hbase.apache.org
Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Additional information:
I replaced all files prefixed with "hadoop" in hbase-0.96.0-hadoop2/lib with the hadoop-2.2.0 libraries.
The ls -l of hbase-0.96.0-hadoop2/lib is as below:
-rw-r--r-- 1 hadoop users 62983 Sep 17 16:13 activation-1.1.jar
-rw-r--r-- 1 hadoop users 4467 Sep 17 23:29 aopalliance-1.0.jar
-rw-r--r-- 1 hadoop users 43033 Sep 17 16:13 asm-3.1.jar
-rw-r--r-- 1 hadoop users 263268 Sep 17 16:27 avro-1.5.3.jar
-rw-r--r-- 1 hadoop users 188671 Sep 17 16:12 commons-beanutils-1.7.0.jar
-rw-r--r-- 1 hadoop users 206035 Sep 17 16:13 commons-beanutils-core-1.8.0.jar
-rw-r--r-- 1 hadoop users 41123 Sep 17 16:12 commons-cli-1.2.jar
-rw-r--r-- 1 hadoop users 259600 Sep 17 16:13 commons-codec-1.7.jar
-rw-r--r-- 1 hadoop users 575389 Sep 17 16:12 commons-collections-3.2.1.jar
-rw-r--r-- 1 hadoop users 238681 Sep 17 16:27 commons-compress-1.4.jar
-rw-r--r-- 1 hadoop users 298829 Sep 17 16:13 commons-configuration-1.6.jar
-rw-r--r-- 1 hadoop users 24239 Sep 17 23:28 commons-daemon-1.0.13.jar
-rw-r--r-- 1 hadoop users 143602 Sep 17 16:12 commons-digester-1.8.jar
-rw-r--r-- 1 hadoop users 112341 Sep 17 16:13 commons-el-1.0.jar
-rw-r--r-- 1 hadoop users 305001 Sep 17 16:12 commons-httpclient-3.1.jar
-rw-r--r-- 1 hadoop users 185140 Sep 17 16:13 commons-io-2.4.jar
-rw-r--r-- 1 hadoop users 284220 Sep 17 16:12 commons-lang-2.6.jar
-rw-r--r-- 1 hadoop users 60686 Sep 17 16:12 commons-logging-1.1.1.jar
-rw-r--r-- 1 hadoop users 988514 Sep 17 16:13 commons-math-2.2.jar
-rw-r--r-- 1 hadoop users 273370 Sep 17 16:27 commons-net-3.1.jar
-rw-r--r-- 1 hadoop users 3566844 Sep 17 16:15 core-3.1.1.jar
-rw-r--r-- 1 hadoop users 15322 Sep 17 16:12 findbugs-annotations-1.3.9-1.jar
-rw-r--r-- 1 hadoop users 21817 Sep 17 23:29 gmbal-api-only-3.0.0-b023.jar
-rw-r--r-- 1 hadoop users 684337 Sep 17 23:29 grizzly-framework-2.1.1.jar
-rw-r--r-- 1 hadoop users 210846 Sep 17 23:29 grizzly-framework-2.1.1-tests.jar
-rw-r--r-- 1 hadoop users 248346 Sep 17 23:29 grizzly-http-2.1.1.jar
-rw-r--r-- 1 hadoop users 193583 Sep 17 23:29 grizzly-http-server-2.1.1.jar
-rw-r--r-- 1 hadoop users 336878 Sep 17 23:29 grizzly-http-servlet-2.1.1.jar
-rw-r--r-- 1 hadoop users 8072 Sep 17 23:29 grizzly-rcm-2.1.1.jar
-rw-r--r-- 1 hadoop users 1795932 Sep 17 16:13 guava-12.0.1.jar
-rw-r--r-- 1 hadoop users 710492 Sep 17 23:29 guice-3.0.jar
-rw-r--r-- 1 hadoop users 65012 Sep 17 23:29 guice-servlet-3.0.jar
-rw-r--r-- 1 hadoop users 16778 Nov 20 17:39 hadoop-annotations-2.2.0.jar
-rw-r--r-- 1 hadoop users 49750 Nov 20 17:40 hadoop-auth-2.2.0.jar
-rw-r--r-- 1 hadoop users 2576 Oct 12 06:20 hadoop-client-2.1.0-beta.jar
-rw-r--r-- 1 hadoop users 2735584 Nov 20 17:50 hadoop-common-2.2.0.jar
-rw-r--r-- 1 hadoop users 5242252 Nov 21 08:48 hadoop-hdfs-2.2.0.jar
-rw-r--r-- 1 hadoop users 1988460 Nov 21 08:48 hadoop-hdfs-2.2.0-tests.jar
-rw-r--r-- 1 hadoop users 482042 Nov 21 08:49 hadoop-mapreduce-client-app-2.2.0.jar
-rw-r--r-- 1 hadoop users 656365 Nov 21 08:49 hadoop-mapreduce-client-common-2.2.0.jar
-rw-r--r-- 1 hadoop users 1455001 Nov 21 08:50 hadoop-mapreduce-client-core-2.2.0.jar
-rw-r--r-- 1 hadoop users 35216 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0.jar
-rw-r--r-- 1 hadoop users 1434852 Nov 21 08:50 hadoop-mapreduce-client-jobclient-2.2.0-tests.jar
-rw-r--r-- 1 hadoop users 21537 Nov 21 08:51 hadoop-mapreduce-client-shuffle-2.2.0.jar
-rw-r--r-- 1 hadoop users 1158936 Nov 21 08:51 hadoop-yarn-api-2.2.0.jar
-rw-r--r-- 1 hadoop users 94728 Nov 21 08:51 hadoop-yarn-client-2.2.0.jar
-rw-r--r-- 1 hadoop users 1301627 Nov 21 08:51 hadoop-yarn-common-2.2.0.jar
-rw-r--r-- 1 hadoop users 175554 Nov 21 08:52 hadoop-yarn-server-common-2.2.0.jar
-rw-r--r-- 1 hadoop users 467638 Nov 21 08:52 hadoop-yarn-server-nodemanager-2.2.0.jar
-rw-r--r-- 1 hadoop users 825853 Oct 12 06:28 hbase-client-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 354845 Oct 12 06:28 hbase-common-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 132690 Oct 12 06:28 hbase-common-0.96.0-hadoop2-tests.jar
-rw-r--r-- 1 hadoop users 97428 Oct 12 06:28 hbase-examples-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 72765 Oct 12 06:28 hbase-hadoop2-compat-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 32096 Oct 12 06:28 hbase-hadoop-compat-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 12174 Oct 12 06:28 hbase-it-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 288784 Oct 12 06:28 hbase-it-0.96.0-hadoop2-tests.jar
-rw-r--r-- 1 hadoop users 94784 Oct 12 06:28 hbase-prefix-tree-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 3134214 Oct 12 06:28 hbase-protocol-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 3058804 Oct 12 06:28 hbase-server-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 3150292 Oct 12 06:28 hbase-server-0.96.0-hadoop2-tests.jar
-rw-r--r-- 1 hadoop users 12554 Oct 12 06:28 hbase-shell-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 10941 Oct 12 06:28 hbase-testing-util-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 2276333 Oct 12 06:28 hbase-thrift-0.96.0-hadoop2.jar
-rw-r--r-- 1 hadoop users 95975 Sep 17 16:15 high-scale-lib-1.1.1.jar
-rw-r--r-- 1 hadoop users 31020 Sep 17 16:14 htrace-core-2.01.jar
-rw-r--r-- 1 hadoop users 352585 Sep 17 16:15 httpclient-4.1.3.jar
-rw-r--r-- 1 hadoop users 181201 Sep 17 16:15 httpcore-4.1.3.jar
-rw-r--r-- 1 hadoop users 227517 Sep 17 16:13 jackson-core-asl-1.8.8.jar
-rw-r--r-- 1 hadoop users 17884 Sep 17 16:13 jackson-jaxrs-1.8.8.jar
-rw-r--r-- 1 hadoop users 669065 Sep 17 16:13 jackson-mapper-asl-1.8.8.jar
-rw-r--r-- 1 hadoop users 32353 Sep 17 16:13 jackson-xc-1.8.8.jar
-rw-r--r-- 1 hadoop users 20642 Sep 17 16:15 jamon-runtime-2.3.1.jar
-rw-r--r-- 1 hadoop users 408133 Sep 17 16:13 jasper-compiler-5.5.23.jar
-rw-r--r-- 1 hadoop users 76844 Sep 17 16:13 jasper-runtime-5.5.23.jar
-rw-r--r-- 1 hadoop users 2497 Sep 17 23:29 javax.inject-1.jar
-rw-r--r-- 1 hadoop users 83586 Sep 17 23:29 javax.servlet-3.0.jar
-rw-r--r-- 1 hadoop users 105134 Sep 17 16:27 jaxb-api-2.2.2.jar
-rw-r--r-- 1 hadoop users 890168 Sep 17 16:13 jaxb-impl-2.2.3-1.jar
-rw-r--r-- 1 hadoop users 129217 Sep 17 23:29 jersey-client-1.8.jar
-rw-r--r-- 1 hadoop users 458233 Sep 17 16:13 jersey-core-1.8.jar
-rw-r--r-- 1 hadoop users 17585 Sep 17 23:29 jersey-grizzly2-1.8.jar
-rw-r--r-- 1 hadoop users 14712 Sep 17 23:29 jersey-guice-1.8.jar
-rw-r--r-- 1 hadoop users 147933 Sep 17 16:13 jersey-json-1.8.jar
-rw-r--r-- 1 hadoop users 694352 Sep 17 16:13 jersey-server-1.8.jar
-rw-r--r-- 1 hadoop users 28034 Sep 17 23:29 jersey-test-framework-core-1.8.jar
-rw-r--r-- 1 hadoop users 12907 Sep 17 23:29 jersey-test-framework-grizzly2-1.8.jar
-rw-r--r-- 1 hadoop users 321806 Sep 17 16:27 jets3t-0.6.1.jar
-rw-r--r-- 1 hadoop users 75963 Sep 17 16:13 jettison-1.3.1.jar
-rw-r--r-- 1 hadoop users 539912 Sep 17 16:13 jetty-6.1.26.jar
-rw-r--r-- 1 hadoop users 18891 Sep 17 16:15 jetty-sslengine-6.1.26.jar
-rw-r--r-- 1 hadoop users 177131 Sep 17 16:13 jetty-util-6.1.26.jar
-rw-r--r-- 1 hadoop users 13832273 Sep 17 16:15 jruby-complete-1.6.8.jar
-rw-r--r-- 1 hadoop users 185746 Sep 17 16:27 jsch-0.1.42.jar
-rw-r--r-- 1 hadoop users 1024680 Sep 17 16:13 jsp-2.1-6.1.14.jar
-rw-r--r-- 1 hadoop users 134910 Sep 17 16:13 jsp-api-2.1-6.1.14.jar
-rw-r--r-- 1 hadoop users 100636 Sep 17 16:27 jsp-api-2.1.jar
-rw-r--r-- 1 hadoop users 33015 Sep 17 16:13 jsr305-1.3.9.jar
-rw-r--r-- 1 hadoop users 245039 Sep 17 16:12 junit-4.11.jar
-rw-r--r-- 1 hadoop users 347531 Sep 17 16:15 libthrift-0.9.0.jar
-rw-r--r-- 1 hadoop users 489884 Sep 17 16:12 log4j-1.2.17.jar
-rw-r--r-- 1 hadoop users 42212 Sep 17 23:29 management-api-3.0.0-b012.jar
-rw-r--r-- 1 hadoop users 82445 Sep 17 16:14 metrics-core-2.1.2.jar
drwxr-xr-x 3 hadoop users 4096 Nov 21 09:10 native
-rw-r--r-- 1 hadoop users 1206119 Sep 18 04:00 netty-3.6.6.Final.jar
-rw-r--r-- 1 hadoop users 29555 Sep 17 16:27 paranamer-2.3.jar
-rw-r--r-- 1 hadoop users 533455 Sep 17 16:13 protobuf-java-2.5.0.jar
drwxr-xr-x 5 hadoop users 4096 Sep 28 10:37 ruby
-rw-r--r-- 1 hadoop users 132368 Sep 17 16:13 servlet-api-2.5-6.1.14.jar
-rw-r--r-- 1 hadoop users 105112 Sep 17 16:12 servlet-api-2.5.jar
-rw-r--r-- 1 hadoop users 25962 Sep 17 16:14 slf4j-api-1.6.4.jar
-rw-r--r-- 1 hadoop users 9748 Oct 3 07:15 slf4j-log4j12-1.6.4.jar
-rw-r--r-- 1 hadoop users 995720 Sep 17 16:27 snappy-java-1.0.3.2.jar
-rw-r--r-- 1 hadoop users 26514 Sep 17 16:13 stax-api-1.0.1.jar
-rw-r--r-- 1 hadoop users 15010 Sep 17 16:13 xmlenc-0.52.jar
-rw-r--r-- 1 hadoop users 94672 Sep 17 16:27 xz-1.0.jar
-rw-r--r-- 1 hadoop users 779974 Sep 17 16:14 zookeeper-3.4.5.jar
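[Editor's note] A quick sanity check for a lib directory like the one listed above is to flag hadoop-prefixed jars that do not carry the expected Hadoop version; the full listing in this thread still shows a hadoop-client-2.1.0-beta.jar sitting next to the 2.2.0 jars. The sketch below uses a scratch directory with stand-in file names rather than the real hbase-0.96.0-hadoop2/lib path:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch: scan a lib directory for hadoop-* jars whose name does not carry
// the expected version string. Directory and jar names are illustrative;
// in practice, point `lib` at the real hbase-0.96.0-hadoop2/lib.
public class HadoopJarCheck {
    static List<String> mismatches(Path lib, String version) throws IOException {
        List<String> bad = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(lib, "hadoop-*.jar")) {
            for (Path p : ds) {
                String name = p.getFileName().toString();
                if (!name.contains(version)) bad.add(name);   // version not in file name
            }
        }
        return bad;
    }

    public static void main(String[] args) throws IOException {
        Path lib = Files.createTempDirectory("hbase-lib");    // stand-in for the real dir
        for (String jar : new String[] {
                "hadoop-common-2.2.0.jar",
                "hadoop-hdfs-2.2.0.jar",
                "hadoop-client-2.1.0-beta.jar",               // a leftover older jar
                "zookeeper-3.4.5.jar" }) {
            Files.createFile(lib.resolve(jar));
        }
        System.out.println("mismatched: " + mismatches(lib, "2.2.0"));
    }
}
```

A mixed classpath like this (old hadoop-client next to new hadoop-common/hdfs) is exactly the kind of setup worth ruling out before digging into the shutdown errors themselves.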
Best regards,
Henry
-----Original Message-----
From: MA11 YTHung1
Sent: Thursday, November 21, 2013 9:51 AM
To: user@hbase.apache.org
Subject: RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
I'm using hadoop-2.2.0 stable
-----Original Message-----
From: Jimmy Xiang [mailto:jxiang@cloudera.com]
Sent: Thursday, November 21, 2013 9:49 AM
To: user
Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Which version of Hadoop do you use?
On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
> Hi All,
>
> When stopping the master or a regionserver, I found some ERROR and WARN
> entries in the log files. Can these errors cause problems in HBase?
>
> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> java.lang.IllegalArgumentException: object is not an instance of declaring class
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> at $Proxy18.close(Unknown Source)
> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>
> Best regards,
> Henry
>
> ________________________________
> The privileged confidential information contained in this email is
> intended for use only by the addressees as indicated by the original
> sender of this email. If you are not the addressee indicated in this
> email or are not responsible for delivery of the email to such a
> person, please kindly reply to the sender indicating this fact and
> delete all copies of it from your computer and network server
> immediately. Your cooperation is highly appreciated. It is advised
> that any unauthorized use of confidential information of Winbond is
> strictly prohibited; and any information in this email irrelevant to
> the official business of Winbond shall be deemed as neither given nor endorsed by Winbond.
>
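[Editor's note] The IllegalArgumentException in the trace above ("object is not an instance of declaring class") is what java.lang.reflect.Method.invoke throws when the receiver does not implement the method's declaring class. The trace suggests that HBase's HFileSystem wraps the HDFS NameNode proxy in a second dynamic proxy (HFileSystem$1 is the invocation handler), so when DFSClient's shutdown hook hands that wrapper to RPC.stopProxy, the reflective close() call lands on an object of the wrong type. A minimal, self-contained sketch of that failure mode with plain JDK reflection (not HBase code; the interface name is made up):

```java
import java.io.Closeable;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Reproduces the reflective failure: invoking Closeable.close() on a dynamic
// proxy that does not implement Closeable, analogous to RPC.stopProxy meeting
// a re-wrapped NameNode proxy.
public class StopProxyRepro {
    interface NamenodeLike { String ping(); }   // stand-in for the wrapped protocol

    public static void main(String[] args) throws Exception {
        // A dynamic proxy implementing only NamenodeLike, never Closeable.
        Object proxy = Proxy.newProxyInstance(
                StopProxyRepro.class.getClassLoader(),
                new Class<?>[] { NamenodeLike.class },
                (p, method, a) -> "pong");

        // Reflectively call Closeable.close() on it, as a shutdown path might.
        Method close = Closeable.class.getMethod("close");
        try {
            close.invoke(proxy);   // receiver is not a Closeable instance
            System.out.println("unexpected: close() succeeded");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```

The error is noisy but fires inside a shutdown hook, after the process has already decided to exit, which is why the master still stops cleanly despite it.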
RE: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Posted by Henry Hung <YT...@winbond.com>.
I'm using hadoop-2.2.0 stable
-----Original Message-----
From: Jimmy Xiang [mailto:jxiang@cloudera.com]
Sent: Thursday, November 21, 2013 9:49 AM
To: user
Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Which version of Hadoop do you use?
On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
> Hi All,
>
> When stopping the master or a regionserver, I found some ERROR and WARN
> entries in the log files. Can these errors cause problems in HBase?
>
> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> java.lang.IllegalArgumentException: object is not an instance of declaring class
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> at $Proxy18.close(Unknown Source)
> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>
> Best regards,
> Henry
>
Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called on non proxy.
Posted by Jimmy Xiang <jx...@cloudera.com>.
Which version of Hadoop do you use?
On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung <YT...@winbond.com> wrote:
> Hi All,
>
> When stopping the master or a regionserver, I found some ERROR and WARN
> entries in the log files. Can these errors cause problems in HBase?
>
> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> 13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
> java.lang.IllegalArgumentException: object is not an instance of declaring class
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> at $Proxy18.close(Unknown Source)
> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:621)
> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> 13/11/21 09:35:36 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is not Closeable or does not provide closeable invocation handler class $Proxy18
> at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:639)
> at org.apache.hadoop.hdfs.DFSClient.closeConnectionToNamenode(DFSClient.java:738)
> at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:794)
> at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:847)
> at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2524)
> at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2541)
> at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
>
> Best regards,
> Henry
>