Posted to common-issues@hadoop.apache.org by "Lin Yiqun (JIRA)" <ji...@apache.org> on 2016/03/04 08:16:40 UTC
[jira] [Updated] (HADOOP-12887) RetryInvocationHandler failedRetry exception logging's level is not correct
[ https://issues.apache.org/jira/browse/HADOOP-12887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lin Yiqun updated HADOOP-12887:
-------------------------------
Status: Patch Available (was: Open)
Attach a simple patch.
> RetryInvocationHandler failedRetry exception logging's level is not correct
> ---------------------------------------------------------------------------
>
> Key: HADOOP-12887
> URL: https://issues.apache.org/jira/browse/HADOOP-12887
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.1
> Reporter: Lin Yiqun
> Assignee: Lin Yiqun
> Priority: Minor
>
> I used a dfsadmin command to list the dirs/files info that I wanted to search. But I found an {{info}}-level log entry that showed there was a RemoteException.
> {code}
> $ hadoop fs -ls /
> 2016-03-04 14:52:24,710 INFO [main] (RetryInvocationHandler.java:140) - Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over xxxx/xx.xx.xx.xx:9000 after 1 fail over attempts. Trying to fail over after sleeping for 1095ms.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
> at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1774)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3856)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1008)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> {code}
> Finally I found that the reason was an incorrect HDFS HA configuration on my side. But it still seems to be a problem that this exception message is logged at info level.
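The gist of the fix is choosing a log level based on whether another failover attempt is still pending. As a minimal sketch (the class and method names below are illustrative, not the actual patch, and `java.util.logging` stands in for Hadoop's logging facade): while a retry is still coming, the exception is expected and can stay at a low level; only when retries are exhausted should it be escalated.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch only; names are hypothetical, not from the patch.
public class RetryLogLevelSketch {
    private static final Logger LOG =
            Logger.getLogger(RetryLogLevelSketch.class.getName());

    // While a failover retry is still pending, the exception is expected
    // (e.g. a standby NameNode rejecting a READ), so a low level suffices;
    // once retries are exhausted, escalate to WARNING.
    static Level levelFor(boolean willRetry) {
        return willRetry ? Level.FINE : Level.WARNING;
    }

    static void logRetry(Exception e, int failovers, boolean willRetry) {
        LOG.log(levelFor(willRetry),
                "Exception after " + failovers + " failover attempt(s)"
                        + (willRetry ? "; retrying" : "; giving up"), e);
    }
}
```

With this shape, the standby-NameNode `StandbyException` shown above would no longer surface at INFO on every transparent failover, while a terminal failure would still be visible by default.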
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)