Posted to user@hbase.apache.org by Akmal Abbasov <ak...@icloud.com> on 2015/05/03 16:53:52 UTC

exportSnapshot tool

Hi, 
I am using the exportSnapshot tool and have observed some strange behaviour. I have HBase HA configured in my destination cluster, with two HBase masters, hbm1 and hbm2.
Currently hbm2 is active and hbm1 is in standby mode. I am assuming that when using the exportSnapshot tool I need to specify the address of the server where my active HBase master is running.
But when I do this, I am getting:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1688)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1258)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3684)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:803)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:779)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

	at org.apache.hadoop.ipc.Client.call(Client.java:1411)
	at org.apache.hadoop.ipc.Client.call(Client.java:1364)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:870)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:991)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:995)

But when I try with the standby HBase master, everything works.
Is this how it is supposed to work?
Thank you.

Regards,
Akmal Abbasov

Re:Re: exportSnapshot tool

Posted by David chen <c7...@163.com>.
Do your active namenode and HBase master share a host? If so, that's fine.
I don't think your configuration is problematic; the error you posted should be related to HDFS HA, not HBase HA.
I have encountered this error before, although in my case both namenodes were in standby state. It was solved by initializing the HDFS HA state in ZooKeeper, so I suggest you try that.
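If the destination HDFS is HA-enabled, one way to avoid depending on which namenode is currently active is to address the cluster by its logical nameservice rather than by a single host. A minimal sketch, assuming a nameservice named mycluster with namenode IDs nn1 and nn2 on hosts nn1-host and nn2-host (all hypothetical names; substitute the values from your hdfs-site.xml):

# client-side hdfs-site.xml entries for the destination nameservice
#   dfs.nameservices                             = mycluster
#   dfs.ha.namenodes.mycluster                   = nn1,nn2
#   dfs.namenode.rpc-address.mycluster.nn1       = nn1-host:8020
#   dfs.namenode.rpc-address.mycluster.nn2       = nn2-host:8020
#   dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

# export against the nameservice; the client follows whichever namenode is active
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy \
  -copy-to hdfs://mycluster/hbase -overwrite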

Re: exportSnapshot tool

Posted by Akmal Abbasov <ak...@icloud.com>.
Hi David,
I have HDFS HA. I was supposing that maybe the namenode on the host with the active HBase master was not the active one. But no, both the namenode and the HBase master are active on hb1m.
So is it supposed to work this way, or is it a problem with my configuration?
Thank you.

Regards,
Akmal Abbasov
> On 04 May 2015, at 06:20, David chen <c7...@163.com> wrote:
> 
> Maybe you should initialize the HDFS HA state in ZooKeeper, i.e. execute 'hdfs zkfc -formatZK'.


Re:Re: exportSnapshot tool

Posted by David chen <c7...@163.com>.
Maybe you should initialize the HDFS HA state in ZooKeeper, i.e. execute 'hdfs zkfc -formatZK'.
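For reference, a rough sketch of that initialization: -formatZK creates the HA state znode in ZooKeeper and prompts before overwriting an existing one (the -force and -nonInteractive options change that behaviour). Run it once, on one of the namenode hosts, ideally while the ZKFC daemons are stopped:

# initialize (or re-initialize) the HA state znode in ZooKeeper
hdfs zkfc -formatZK

# then restart the ZKFC on each namenode host so an active namenode is elected
hadoop-daemon.sh start zkfc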

Re: exportSnapshot tool

Posted by Akmal Abbasov <ak...@icloud.com>.
Hi Ted,
Yes, I can confirm that hb1m is the active HBase master.
I tested it using ./hbase-jruby get-active-master.rb, which is in HBASE_DIR/bin/.
I can also see that hb1m is the active one in my dashboard.
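Note that get-active-master.rb reports the active HBase master only; the StandbyException in the trace is thrown by an HDFS namenode, so the namenode state is worth checking directly as well. A quick check, assuming the namenode IDs for the nameservice are nn1 and nn2 (hypothetical; they are listed under dfs.ha.namenodes.* in hdfs-site.xml):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2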
Thank you.

Regards,
Akmal Abbasov

> On 03 May 2015, at 17:17, Ted Yu <yu...@gmail.com> wrote:
> 
> bq. Operation category READ is not supported in state standby
> 
> Can you confirm whether the active namenode is running on hb1m?
> 
> Cheers
> 
> On Sun, May 3, 2015 at 8:00 AM, Akmal Abbasov <ak...@icloud.com>
> wrote:
> 
>> Hi Ted,
>> I am using hadoop-2.5.1 and hbase-0.98.7-hadoop2.
>> The command for snapshot export is:
>> hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy
>> -copy-to hdfs://hb1m/hbase -overwrite
>> Thank you
>> 
>> Regards,
>> Akmal Abbasov
>> 
>>> On 03 May 2015, at 16:57, Ted Yu <yu...@gmail.com> wrote:
>>> 
>>> Can you give us a bit more information?
>>> Such as:
>>> the release of HBase you're using
>>> the release of Hadoop you're using
>>> the command line for the snapshot export
>>> 
>>> Thanks
>>> 
>>> On Sun, May 3, 2015 at 7:53 AM, Akmal Abbasov <ak...@icloud.com>
>>> wrote:
>>> 
>>>> Hi,
>>>> I am using the exportSnapshot tool and have observed some strange
>>>> behaviour. I have HBase HA configured in my destination cluster, with
>>>> two HBase masters, hbm1 and hbm2.
>>>> Currently hbm2 is active and hbm1 is in standby mode. I am assuming
>>>> that when using the exportSnapshot tool I need to specify the address
>>>> of the server where my active HBase master is running.
>>>> But when I do this, I am getting:
>>>> [stack trace snipped; identical to the one in the original message above]
>>>> 
>>>> But when I try with the standby HBase master, everything works.
>>>> Is this how it is supposed to work?
>>>> Thank you.
>>>> 
>>>> Regards,
>>>> Akmal Abbasov
>> 
>> 


Re: exportSnapshot tool

Posted by Ted Yu <yu...@gmail.com>.
bq. Operation category READ is not supported in state standby

Can you confirm whether the active namenode is running on hb1m?

Cheers

On Sun, May 3, 2015 at 8:00 AM, Akmal Abbasov <ak...@icloud.com>
wrote:

> Hi Ted,
> I am using hadoop-2.5.1 and hbase-0.98.7-hadoop2.
> The command for snapshot export is:
> hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy
> -copy-to hdfs://hb1m/hbase -overwrite
> Thank you
>
> Regards,
> Akmal Abbasov
>
> > On 03 May 2015, at 16:57, Ted Yu <yu...@gmail.com> wrote:
> >
> > Can you give us a bit more information?
> > Such as:
> > the release of HBase you're using
> > the release of Hadoop you're using
> > the command line for the snapshot export
> >
> > Thanks
> >
> > On Sun, May 3, 2015 at 7:53 AM, Akmal Abbasov <ak...@icloud.com>
> > wrote:
> >
> >> Hi,
> >> I am using the exportSnapshot tool and have observed some strange
> >> behaviour. I have HBase HA configured in my destination cluster, with
> >> two HBase masters, hbm1 and hbm2.
> >> Currently hbm2 is active and hbm1 is in standby mode. I am assuming
> >> that when using the exportSnapshot tool I need to specify the address
> >> of the server where my active HBase master is running.
> >> But when I do this, I am getting:
> >> [stack trace snipped; identical to the one in the original message above]
> >>
> >> But when I try with the standby HBase master, everything works.
> >> Is this how it is supposed to work?
> >> Thank you.
> >>
> >> Regards,
> >> Akmal Abbasov
>
>

Re: exportSnapshot tool

Posted by Akmal Abbasov <ak...@icloud.com>.
Hi Ted,
I am using hadoop-2.5.1 and hbase-0.98.7-hadoop2.
The command for snapshot export is:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot snappy -copy-to hdfs://hb1m/hbase -overwrite
Thank you

Regards,
Akmal Abbasov

> On 03 May 2015, at 16:57, Ted Yu <yu...@gmail.com> wrote:
> 
> Can you give us a bit more information?
> Such as:
> the release of HBase you're using
> the release of Hadoop you're using
> the command line for the snapshot export
> 
> Thanks
> 
> On Sun, May 3, 2015 at 7:53 AM, Akmal Abbasov <ak...@icloud.com>
> wrote:
> 
>> Hi,
>> I am using the exportSnapshot tool and have observed some strange
>> behaviour. I have HBase HA configured in my destination cluster, with
>> two HBase masters, hbm1 and hbm2.
>> Currently hbm2 is active and hbm1 is in standby mode. I am assuming
>> that when using the exportSnapshot tool I need to specify the address
>> of the server where my active HBase master is running.
>> But when I do this, I am getting:
>> [stack trace snipped; identical to the one in the original message above]
>> 
>> But when I try with the standby HBase master, everything works.
>> Is this how it is supposed to work?
>> Thank you.
>> 
>> Regards,
>> Akmal Abbasov


Re: exportSnapshot tool

Posted by Ted Yu <yu...@gmail.com>.
Can you give us a bit more information?
Such as:
the release of HBase you're using
the release of Hadoop you're using
the command line for the snapshot export

Thanks

On Sun, May 3, 2015 at 7:53 AM, Akmal Abbasov <ak...@icloud.com>
wrote:

> Hi,
> I am using the exportSnapshot tool and have observed some strange
> behaviour. I have HBase HA configured in my destination cluster, with
> two HBase masters, hbm1 and hbm2.
> Currently hbm2 is active and hbm1 is in standby mode. I am assuming
> that when using the exportSnapshot tool I need to specify the address
> of the server where my active HBase master is running.
> But when I do this, I am getting:
> [stack trace snipped; identical to the one in the original message above]
>
> But when I try with the standby HBase master, everything works.
> Is this how it is supposed to work?
> Thank you.
>
> Regards,
> Akmal Abbasov