Posted to common-user@hadoop.apache.org by "shanmuganathan.r" <sh...@zohocorp.com> on 2011/08/11 20:19:21 UTC

Avatar namenode?

Hi All,

      I am running HBase in distributed mode on a seven-node cluster with a backup master. HBase is running properly in the backup master environment. I want to run this HBase on top of the High Availability Hadoop. I read about the Avatar node at the following link http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html .
      I need more help regarding the Avatar namenode configuration.


1. Which IP should be given for the DataNode fs.default.name?
2. Is there any good method other than Avatar available for a backup namenode?
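
For context, fs.default.name is the core-site.xml setting that clients and datanodes use to locate the namenode. A minimal sketch with a made-up address; under an avatar-style setup this would presumably be a single logical/virtual address that moves with the active namenode:

<!-- core-site.xml (sketch; namenode-vip:9000 is a hypothetical address) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-vip:9000</value>
</property>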



Regards,

Shanmuganathan



RE: Not Starting HMaster because of Permission denied

Posted by "Zhong, Andy" <Sh...@searshc.com>.
Yes, it works! Thanks, - Andy 

-----Original Message-----
From: Sonal Goyal [mailto:sonalgoyal4@gmail.com] 
Sent: Friday, August 19, 2011 11:49 AM
To: user@hbase.apache.org
Subject: Re: Not Starting HMaster because of Permission denied

Hi Andy,

We run HBase under the same user, so unfortunately I haven't seen your
issue. But the hbase user definitely needs to own the hbase.rootdir to
be able to write to it. Do your hbase and hadoop users belong to the
same group, and are the permissions on the /activity dfs folder set
correctly for the hbase user to write to it?

I would try the following to rule out permission issues.

bin/hadoop fs -mkdir /activity
bin/hadoop fs -chown hbase /activity

And then start hbase.

If anyone on the list has seen this issue or has some other thoughts,
please share.

Best Regards,
Sonal
Crux: Reporting for HBase <https://github.com/sonalgoyal/crux>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>





On Fri, Aug 19, 2011 at 8:57 PM, Zhong, Andy
<Sh...@searshc.com> wrote:

> Sonal,
>
> Hmm... Thanks for your reply! But I am not aware of dfs security being
> used here (Hadoop 0.20.2 + HBase 0.20.3). Is this the top-level
> directory related to the hbase.rootdir below, and are you suggesting
> we create the following?
> bin/hadoop fs -mkdir /activity/hbase
> bin/hadoop fs -chown hbase /activity/hbase
>
> Below is the definition of the root dir:
> <name>hbase.rootdir</name>
> <value>hdfs://hadoopStressServer/activity/hbase</value>
>
>
> Regards,
> Andy Zhong
>
> -----Original Message-----
> From: Sonal Goyal [mailto:sonalgoyal4@gmail.com]
> Sent: Friday, August 19, 2011 3:10 AM
> To: user@hbase.apache.org
> Subject: Re: Not Starting HMaster because of Permission denied
>
> Hi Andy,
>
> I guess you are using dfs security, in which case your hbase user does
> not have the permission to create the top level directory /hbase in
> the dfs. Can you try the following and then start your master? Let us
> know how it goes.
>
> bin/hadoop fs -mkdir /hbase
> bin/hadoop fs -chown hbase /hbase
>
>
> Best Regards,
> Sonal
> Crux: Reporting for HBase <https://github.com/sonalgoyal/crux>
> Nube Technologies <http://www.nubetech.co>
>
> <http://in.linkedin.com/in/sonalgoyal>
>
>
>
>
>
> On Fri, Aug 19, 2011 at 1:02 PM, Zhong, Andy
> <Sh...@searshc.com> wrote:
>
> > Hi All,
> > I am facing an issue starting HMaster because of Permission denied,
> > although the two region servers seem to start properly. I would much
> > appreciate it if anyone could help me figure out the root cause. I
> > have hadoop running under the 'hadoop' user, and HBase running under
> > the 'hbase' user. Thanks, - Andy Zhong
> >
> >
> > 2011-08-19 01:30:31,205 FATAL master.HMaster - Not starting HMaster because:
> > org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
> >        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
> >        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:910)
> >        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:262)
> >        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1115)
> >        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:205)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
> >        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
> >

Re: Not Starting HMaster because of Permission denied

Posted by Sonal Goyal <so...@gmail.com>.
Hi Andy,

We run HBase under the same user, so unfortunately I haven't seen your issue.
But the hbase user definitely needs to own the hbase.rootdir, to be able to
write to it. Do your hbase and hadoop users belong to the same group, and are
the permissions on the /activity dfs folder set correctly for the hbase user
to write to it?

I would try the following to rule out permission issues.

bin/hadoop fs -mkdir /activity
bin/hadoop fs -chown hbase /activity

And then start hbase.
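
To double-check that the chown took effect, a quick sanity check (not from
the original mail) is to list the root and confirm hbase owns /activity:

bin/hadoop fs -ls /
# the /activity line should show hbase as owner,
# e.g. drwxr-xr-x  - hbase supergroup ... /activity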

If anyone on the list has seen this issue or has some other thoughts, please
share.

Best Regards,
Sonal
Crux: Reporting for HBase <https://github.com/sonalgoyal/crux>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>





On Fri, Aug 19, 2011 at 8:57 PM, Zhong, Andy <Sh...@searshc.com> wrote:

> Sonal,
>
> Hmm... Thanks for your reply! But I am not aware of dfs security being
> used here (Hadoop 0.20.2 + HBase 0.20.3). Is this the top-level directory
> related to the hbase.rootdir below, and are you suggesting we create
> the following?
> bin/hadoop fs -mkdir /activity/hbase
> bin/hadoop fs -chown hbase /activity/hbase
>
> Below is the definition of the root dir:
> <name>hbase.rootdir</name>
> <value>hdfs://hadoopStressServer/activity/hbase</value>
>
>
> Regards,
> Andy Zhong
>
> -----Original Message-----
> From: Sonal Goyal [mailto:sonalgoyal4@gmail.com]
> Sent: Friday, August 19, 2011 3:10 AM
> To: user@hbase.apache.org
> Subject: Re: Not Starting HMaster because of Permission denied
>
> Hi Andy,
>
> I guess you are using dfs security, in which case your hbase user does
> not have the permission to create the top level directory /hbase in the
> dfs. Can you try the following and then start your master? Let us know
> how it goes.
>
> bin/hadoop fs -mkdir /hbase
> bin/hadoop fs -chown hbase /hbase
>
>
> Best Regards,
> Sonal
> Crux: Reporting for HBase <https://github.com/sonalgoyal/crux>
> Nube Technologies <http://www.nubetech.co>
>
> <http://in.linkedin.com/in/sonalgoyal>
>
>
>
>
>
> On Fri, Aug 19, 2011 at 1:02 PM, Zhong, Andy
> <Sh...@searshc.com> wrote:
>
> > Hi All,
> > I am facing an issue starting HMaster because of Permission denied,
> > although the two region servers seem to start properly. I would much
> > appreciate it if anyone could help me figure out the root cause. I
> > have hadoop running under the 'hadoop' user, and HBase running under
> > the 'hbase' user. Thanks, - Andy Zhong
> >
> >
> > 2011-08-19 01:30:31,205 FATAL master.HMaster - Not starting HMaster because:
> > org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
> >        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
> >        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:910)
> >        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:262)
> >        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1115)
> >        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:205)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
> >        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
> >

RE: Not Starting HMaster because of Permission denied

Posted by "Zhong, Andy" <Sh...@searshc.com>.
Sonal,

Hmm... Thanks for your reply! But I am not aware of dfs security being
used here (Hadoop 0.20.2 + HBase 0.20.3). Is this the top-level directory
related to the hbase.rootdir below, and are you suggesting we create
the following?
bin/hadoop fs -mkdir /activity/hbase
bin/hadoop fs -chown hbase /activity/hbase
 
Below is the definition of the root dir:
<name>hbase.rootdir</name>
<value>hdfs://hadoopStressServer/activity/hbase</value> 
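
For completeness, the full entry in hbase-site.xml is just those two lines
wrapped in the usual property element:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoopStressServer/activity/hbase</value>
</property>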


Regards,
Andy Zhong

-----Original Message-----
From: Sonal Goyal [mailto:sonalgoyal4@gmail.com] 
Sent: Friday, August 19, 2011 3:10 AM
To: user@hbase.apache.org
Subject: Re: Not Starting HMaster because of Permission denied

Hi Andy,

I guess you are using dfs security, in which case your hbase user does
not have the permission to create the top level directory /hbase in the
dfs. Can you try the following and then start your master? Let us know
how it goes.

bin/hadoop fs -mkdir /hbase
bin/hadoop fs -chown hbase /hbase


Best Regards,
Sonal
Crux: Reporting for HBase <https://github.com/sonalgoyal/crux>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>





On Fri, Aug 19, 2011 at 1:02 PM, Zhong, Andy
<Sh...@searshc.com> wrote:

> Hi All,
> I am facing an issue starting HMaster because of Permission denied,
> although the two region servers seem to start properly. I would much
> appreciate it if anyone could help me figure out the root cause. I
> have hadoop running under the 'hadoop' user, and HBase running under
> the 'hbase' user. Thanks, - Andy Zhong
>
>
> 2011-08-19 01:30:31,205 FATAL master.HMaster - Not starting HMaster because:
> org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:910)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:262)
>        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1115)
>        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:205)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
>        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
>

Re: Not Starting HMaster because of Permission denied

Posted by Sonal Goyal <so...@gmail.com>.
Hi Andy,

I guess you are using dfs security, in which case your hbase user does not
have the permission to create the top level directory /hbase in the dfs. Can
you try the following and then start your master? Let us know how it goes.

bin/hadoop fs -mkdir /hbase
bin/hadoop fs -chown hbase /hbase
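
A quick way to confirm afterwards that the hbase user can actually write
there (assuming you can sudo to the hbase user; the test path is made up):

sudo -u hbase bin/hadoop fs -mkdir /hbase/perm-test
sudo -u hbase bin/hadoop fs -rmr /hbase/perm-test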


Best Regards,
Sonal
Crux: Reporting for HBase <https://github.com/sonalgoyal/crux>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>





On Fri, Aug 19, 2011 at 1:02 PM, Zhong, Andy <Sh...@searshc.com> wrote:

> Hi All,
> I am facing an issue starting HMaster because of Permission denied,
> although the two region servers seem to start properly. I would much
> appreciate it if anyone could help me figure out the root cause. I
> have hadoop running under the 'hadoop' user, and HBase running under
> the 'hbase' user. Thanks, - Andy Zhong
>
>
> 2011-08-19 01:30:31,205 FATAL master.HMaster - Not starting HMaster because:
> org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:910)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:262)
>        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1115)
>        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:205)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
>        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)
>

Not Starting HMaster because of Permission denied

Posted by "Zhong, Andy" <Sh...@searshc.com>.
Hi All,
I am facing an issue starting HMaster because of Permission denied,
although the two region servers seem to start properly. I would much
appreciate it if anyone could help me figure out the root cause. I
have hadoop running under the 'hadoop' user, and HBase running under
the 'hbase' user. Thanks, - Andy Zhong


2011-08-19 01:30:31,205 FATAL master.HMaster - Not starting HMaster because:
org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:910)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:262)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1115)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:205)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1241)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1282)


Re: Avatar namenode?

Posted by lars hofhansl <lh...@yahoo.com>.
I was thinking the same.

In an ideal world, I guess, the namenodes would be quorum based.
Clients would be aware of all the namenodes and fire updates in parallel to all of them, and an update would not return until N namenodes had confirmed it.

To make it easier, one could initially require that all namenodes receive every update, while only one namenode serves lookups.

When namenodes come up they somehow need to synchronize with each other. (waving hands here)

When a namenode fails there would still be some manual intervention to remove it from the quorum, or to switch to another one if it was the main namenode.

This would double the namenode traffic on the network, slow the clients to the speed of the slowest namenode, etc.

And nothing is ever simple in distributed computing; there will be a thousand corner cases to consider.
The NFS approach will at least work, and it is easier to reason about the state the system is in.
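
A minimal sketch of that quorum write in Java (the Namenode interface here is hypothetical, not real Hadoop API; it only illustrates "fire to all, wait for N acks"):

import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class QuorumWriter {
    // Hypothetical stand-in for a namenode's edit-log RPC.
    interface Namenode { void applyUpdate(byte[] edit) throws Exception; }

    private final List<Namenode> namenodes;
    private final ExecutorService pool = Executors.newCachedThreadPool();

    QuorumWriter(List<Namenode> namenodes) { this.namenodes = namenodes; }

    /** Blocks until 'required' namenodes have confirmed the edit. */
    void write(byte[] edit, int required) throws InterruptedException {
        CountDownLatch acks = new CountDownLatch(required);
        for (Namenode nn : namenodes) {
            pool.submit(() -> {
                try {
                    nn.applyUpdate(edit);
                    acks.countDown(); // one more confirmation
                } catch (Exception e) {
                    // A failed namenode simply never acks; removing it
                    // from the quorum stays a manual step, as noted above.
                }
            });
        }
        acks.await(); // returns once N namenodes have confirmed
    }
}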


-- Lars



________________________________
From: Ryan Rawson <ry...@gmail.com>
To: user@hbase.apache.org
Cc: hadoop-user@lucene.apache.org
Sent: Thursday, August 18, 2011 9:57 PM
Subject: Re: Avatar namenode?

There are a few problems with Avatar node that would prevent me from
ever using it:

- assumption of highly available NFS, which would typically mean
specialized hardware
- failover time is potentially lengthy (the article says 60 seconds), and
HBase regionservers might fail in the meantime

It's an interesting hack (I hear it was only 800 LOC), but I'm not sure
you can run it if you are not Dhruba.

On Thu, Aug 18, 2011 at 9:49 PM, Jack Levin <ma...@gmail.com> wrote:
> I don't think anyone except Facebook actually uses it.  Their
> case is special, as they have millions and millions of files in HDFS.
>
> -Jack
>
> On Thu, Aug 11, 2011 at 11:19 AM, shanmuganathan.r
> <sh...@zohocorp.com> wrote:
>>
>> Hi All,
>>
>>      I am running HBase in distributed mode on a seven-node cluster with a backup master. HBase is running properly in the backup master environment. I want to run this HBase on top of the High Availability Hadoop. I read about the Avatar node at the following link http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html .
>>      I need more help regarding the Avatar namenode configuration.
>>
>>
>> 1. Which IP should be given for the DataNode fs.default.name?
>> 2. Is there any good method other than Avatar available for a backup namenode?
>>
>>
>>
>> Regards,
>>
>> Shanmuganathan
>>
>>
>>
>

Re: Avatar namenode?

Posted by Ryan Rawson <ry...@gmail.com>.
There are a few problems with Avatar node that would prevent me from
ever using it:

- assumption of highly available NFS, which would typically mean
specialized hardware
- failover time is potentially lengthy (the article says 60 seconds), and
HBase regionservers might fail in the meantime

It's an interesting hack (I hear it was only 800 LOC), but I'm not sure
you can run it if you are not Dhruba.

On Thu, Aug 18, 2011 at 9:49 PM, Jack Levin <ma...@gmail.com> wrote:
> I don't think anyone except Facebook actually uses it.  Their
> case is special, as they have millions and millions of files in HDFS.
>
> -Jack
>
> On Thu, Aug 11, 2011 at 11:19 AM, shanmuganathan.r
> <sh...@zohocorp.com> wrote:
>>
>> Hi All,
>>
>>      I am running HBase in distributed mode on a seven-node cluster with a backup master. HBase is running properly in the backup master environment. I want to run this HBase on top of the High Availability Hadoop. I read about the Avatar node at the following link http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html .
>>      I need more help regarding the Avatar namenode configuration.
>>
>>
>> 1. Which IP should be given for the DataNode fs.default.name?
>> 2. Is there any good method other than Avatar available for a backup namenode?
>>
>>
>>
>> Regards,
>>
>> Shanmuganathan
>>
>>
>>
>

Re: Avatar namenode?

Posted by Jack Levin <ma...@gmail.com>.
I don't think anyone except Facebook actually uses it.  Their
case is special, as they have millions and millions of files in HDFS.

-Jack

On Thu, Aug 11, 2011 at 11:19 AM, shanmuganathan.r
<sh...@zohocorp.com> wrote:
>
> Hi All,
>
>      I am running HBase in distributed mode on a seven-node cluster with a backup master. HBase is running properly in the backup master environment. I want to run this HBase on top of the High Availability Hadoop. I read about the Avatar node at the following link http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html .
>      I need more help regarding the Avatar namenode configuration.
>
>
> 1. Which IP should be given for the DataNode fs.default.name?
> 2. Is there any good method other than Avatar available for a backup namenode?
>
>
>
> Regards,
>
> Shanmuganathan
>
>
>

Re: Avatar namenode?

Posted by Prashant <pr...@imaginea.com>.
Namenode HA is not yet available in open source. Please see
https://issues.apache.org/jira/browse/HDFS-1623

-prashant
On 08/11/2011 11:49 PM, shanmuganathan.r wrote:
> Hi All,
>
>        I am running HBase in distributed mode on a seven-node cluster with a backup master. HBase is running properly in the backup master environment. I want to run this HBase on top of the High Availability Hadoop. I read about the Avatar node at the following link http://hadoopblog.blogspot.com/2010/02/hadoop-namenode-high-availability.html .
>        I need more help regarding the Avatar namenode configuration.
>
>
> 1. Which IP should be given for the DataNode fs.default.name?
> 2. Is there any good method other than Avatar available for a backup namenode?
>
>
>
> Regards,
>
> Shanmuganathan
>
>
>

