Posted to user@hbase.apache.org by Rick Hangartner <ha...@strands.com> on 2008/05/07 19:33:56 UTC

Running hBase in a different user account from hadoop

Hi,

First let me compliment the hbase developers for their work in
offering this very useful tool to the world.

We are running hbase-0.1.1 on top of hadoop-0.16.3, starting the hbase
daemon from an "hbase" user account and the hadoop daemon from a
"hadoop" user account, and have observed this "feature".  We are
running hbase in its own separate "hbase" user account and hadoop in
its own "hadoop" user account on a single machine.

When we try to start up hbase, we see this error message in the log:

2008-05-06 12:09:02,845 ERROR org.apache.hadoop.hbase.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
	at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3329)
	at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3363)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Superuser privilege is required
        ... (etc)

We get this whether the "hadoop-site.xml" includes any one of these
properties or none of them:

   <property>
     <name>dfs.web.ugi</name>
     <value>webuser,webgroup</value>
     <final>true</final>
   </property>

   <property>
     <name>dfs.web.ugi</name>
     <value>webuser,supergroup</value>
     <final>true</final>
   </property>

   <property>
     <name>dfs.web.ugi</name>
     <value>hadoop,supergroup</value>
     <final>true</final>
   </property>
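
Our reading, which could easily be wrong, is that "dfs.web.ugi" only
sets the identity the namenode's web interface acts as, which would
explain why changing it makes no difference to the hbase daemon.  The
setting that looks more relevant to us is the one naming the hdfs
superuser group.  A sketch of what we may try next, assuming we have
the 0.16 key right ("dfs.permissions.supergroup") and assuming the
"hbase" account is a member of a Linux group with whatever name we put
in the value:

   <property>
     <name>dfs.permissions.supergroup</name>
     <!-- the group whose members get hdfs superuser rights; we would
          point it at a group the "hbase" user actually belongs to -->
     <value>hbase</value>
     <final>true</final>
   </property>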

On the other hand, if we run hbase in the hadoop user account instead
of its own user account, we have no problems.

Finally, if we include this property in the "hadoop-site.xml" file to  
turn off the hdfs file system permission feature:

   <property>
     <name>dfs.permissions</name>
     <value>false</value>
     <final>true</final>
   </property>

we can start up hbase in its own user account or in the hadoop user
account.

This almost seems like it has something to do with an interaction
between the Linux ext3 user and group permissions and the hdfs user
and group permissions system.  We may have missed comments on the
mailing lists, in FAQs, or elsewhere addressing this problem; our
apologies if we have.

Congrats and thanks again on a very nice system.

Re: Running hBase in a different user account from hadoop

Posted by stack <st...@duboce.net>.
Rick:

Would suggest you also hang this question out on the hadoop-user mailing 
list.  The fellas who know permissions are more likely to see it there 
I'd say (Thanks for digging in on this one).

St.Ack


Rick Hangartner wrote:
> Hi, we think we've narrowed the issue down a bit from the debug logs.
>
> The method "FSNameSystem.checkPermission()" method is throwing the 
> exception because the "PermissionChecker()" constructor is returning 
> that the hbase user is not a superuser or in the same supergroup as 
> hadoop.
>
>   private void checkSuperuserPrivilege() throws AccessControlException {
>     if (isPermissionEnabled) {
>       PermissionChecker pc = new PermissionChecker(
>           fsOwner.getUserName(), supergroup);
>       if (!pc.isSuper) {
>         throw new AccessControlException("Superuser privilege is required");
>       }
>     }
>   }
>
> If we look at the "PermissionChecker()" constructor we see that it 
> is comparing the hdfs owner name (which should be "hadoop") and the 
> hdfs file system owner's group ("supergroup") to the current user and 
> groups, which the log seems to indicate are user "hbase" and, for 
> user "hbase", only the group "hbase" :
>
>   PermissionChecker(String fsOwner, String supergroup
>       ) throws AccessControlException{
>     UserGroupInformation ugi = UserGroupInformation.getCurrentUGI();
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("ugi=" + ugi);
>     }
>
>     if (ugi != null) {
>       user = ugi.getUserName();
>       groups.addAll(Arrays.asList(ugi.getGroupNames()));
>       isSuper = user.equals(fsOwner) || groups.contains(supergroup);
>     }
>     else {
>       throw new AccessControlException("ugi = null");
>     }
>   }
>
> The current user and group are derived from the thread information:
>
>   private static final ThreadLocal<UserGroupInformation> currentUGI
>     = new ThreadLocal<UserGroupInformation>();
>
>   /** @return the {@link UserGroupInformation} for the current thread */
>   public static UserGroupInformation getCurrentUGI() {
>     return currentUGI.get();
>   }
>
> which we're hoping might be enough to illuminate the problem.
>
> One question this raises is if the "hbase:hbase" user and group are 
> being derived from the Linux file system user and group, or if they 
> are the hdfs user and group?
>
> Otherwise, how can we indicate that "hbase" user is in the hdfs group 
> "supergroup"? Is there a parameter in a hadoop configuration file?  
> Apparently setting the groups of the web server to include 
> "supergroup" didn't have any effect, although perhaps that could be 
> for some other reason?
>
> Thanks very much for any insights.  Incidentally we are now running 
> hbase-0.1.2.
> Rick
>
>
> On May 7, 2008, at 1:20 PM, stack wrote:
>
>> Rick Hangartner wrote:
>>> 1.  By "hbase rootdir", you mean "/hbase" and not a "/user/hbase" 
>>> directory in the hdfs, correct?
>>
>> Yes.  hbase.rootdir.
>>
>>> 2.  When you suggest we move to the head of the 0.1 branch, do you 
>>> mean an 0.1.2 pre-release since right now all the servers we check 
>>> show hbase-0.1.1 as the latest release?
>>
>> Yes.  We put up a 0.1.2 candidate a few weeks ago but a bunch of bugs 
>> came in so we put it aside.  I'm about to put up a new 0.1.2 
>> candidate now.  Watch this list for an update in the next hour or so.
>>
>> Thanks,
>> St.Ack
>
>


Re: Running hBase in a different user account from hadoop

Posted by Rick Hangartner <ha...@strands.com>.
Hi, we think we've narrowed the issue down a bit from the debug logs.

The method "FSNameSystem.checkPermission()" method is throwing the  
exception because the "PermissionChecker()" constructor is returning  
that the hbase user is not a superuser or in the same supergroup as  
hadoop.

   private void checkSuperuserPrivilege() throws AccessControlException {
     if (isPermissionEnabled) {
       PermissionChecker pc = new PermissionChecker(
           fsOwner.getUserName(), supergroup);
       if (!pc.isSuper) {
         throw new AccessControlException("Superuser privilege is required");
       }
     }
   }

If we look at the "PermissionChecker()" constructor we see that it is
comparing the hdfs owner name (which should be "hadoop") and the hdfs
file system owner's group ("supergroup") to the current user and
groups, which the log seems to indicate are user "hbase" and, for
user "hbase", only the group "hbase" :

   PermissionChecker(String fsOwner, String supergroup
       ) throws AccessControlException{
     UserGroupInformation ugi = UserGroupInformation.getCurrentUGI();
     if (LOG.isDebugEnabled()) {
       LOG.debug("ugi=" + ugi);
     }

     if (ugi != null) {
       user = ugi.getUserName();
       groups.addAll(Arrays.asList(ugi.getGroupNames()));
       isSuper = user.equals(fsOwner) || groups.contains(supergroup);
     }
     else {
       throw new AccessControlException("ugi = null");
     }
   }

The current user and group are derived from the thread information:

   private static final ThreadLocal<UserGroupInformation> currentUGI
     = new ThreadLocal<UserGroupInformation>();

   /** @return the {@link UserGroupInformation} for the current thread */
   public static UserGroupInformation getCurrentUGI() {
     return currentUGI.get();
   }

which we're hoping might be enough to illuminate the problem.

One question this raises is whether the "hbase:hbase" user and group
are being derived from the Linux file system user and group, or
whether they are the hdfs user and group?

Otherwise, how can we indicate that the "hbase" user is in the hdfs
group "supergroup"?  Is there a parameter in a hadoop configuration
file?  Apparently setting the groups of the web server to include
"supergroup" didn't have any effect, although perhaps that could be
for some other reason.
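
On the first question, our guess from a quick read of the 0.16
sources (and it is only a guess) is that this UGI is filled in from
the Unix account and Unix groups of whoever started the client
process, i.e. the Linux-side "hbase" user, unless it is overridden in
the client's configuration.  If we have the 0.16 property name and
format right, that override would be a client-side entry in the
"hadoop-site.xml" that the hbase daemons read, along these lines (the
key "hadoop.job.ugi" and its comma-separated "user,group,..." value
are our assumption, not something we have verified):

   <property>
     <name>hadoop.job.ugi</name>
     <!-- user name first, then one or more groups; this would have the
          hbase client claim membership in "supergroup" to hdfs -->
     <value>hbase,supergroup</value>
   </property>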

Thanks very much for any insights.  Incidentally we are now running  
hbase-0.1.2.
Rick


On May 7, 2008, at 1:20 PM, stack wrote:

> Rick Hangartner wrote:
>> 1.  By "hbase rootdir", you mean "/hbase" and not a "/user/hbase"  
>> directory in the hdfs, correct?
>
> Yes.  hbase.rootdir.
>
>> 2.  When you suggest we move to the head of the 0.1 branch, do you  
>> mean an 0.1.2 pre-release since right now all the servers we check  
>> show hbase-0.1.1 as the latest release?
>
> Yes.  We put up a 0.1.2 candidate a few weeks ago but a bunch of  
> bugs came in so we put it aside.  I'm about to put up a new 0.1.2  
> candidate now.  Watch this list for an update in the next hour or so.
>
> Thanks,
> St.Ack


Re: Running hBase in a different user account from hadoop

Posted by stack <st...@duboce.net>.
Rick Hangartner wrote:
> 1.  By "hbase rootdir", you mean "/hbase" and not a "/user/hbase" 
> directory in the hdfs, correct?

Yes.  hbase.rootdir.

> 2.  When you suggest we move to the head of the 0.1 branch, do you 
> mean an 0.1.2 pre-release since right now all the servers we check 
> show hbase-0.1.1 as the latest release?

Yes.  We put up a 0.1.2 candidate a few weeks ago but a bunch of bugs 
came in so we put it aside.  I'm about to put up a new 0.1.2 candidate 
now.  Watch this list for an update in the next hour or so.

Thanks,
St.Ack

Re: Running hBase in a different user account from hadoop

Posted by Rick Hangartner <ha...@strands.com>.
Thanks for that extra info.  The suggestion that we turn permissions
off is probably what we'll do, but for our own education and the
benefit of the list we'll poke at this a bit more and see what we can
get to work.

Just a couple of further questions:

1.  By "hbase rootdir", you mean "/hbase" and not a "/user/hbase"  
directory in the hdfs, correct?

We first changed the owner of "/hbase" to "hbase" from "hadoop" and  
that didn't help.  We then set the permissions on "/hbase" to 775 and  
777 with owner "hbase", and with owner "hadoop" and that didn't help  
either.

We also tried changing the Linux file system permissions to 775 and  
777 and that also didn't seem to make any difference.

We'll keep at it.

2.  When you suggest we move to the head of the 0.1 branch, do you  
mean an 0.1.2 pre-release since right now all the servers we check  
show hbase-0.1.1 as the latest release?

You're quite welcome for the nod at the good work you've done.


On May 7, 2008, at 10:46 AM, stack wrote:

> Is the hbase rootdir writable by the 'hbase' user? (Not from me -- I  
> just turn off permissions on hadoop clusters -- but from our  
> 'permissions' fella).
>
>
> Would also suggest your mileage will be better if you move to the  
> head of the 0.1 branch.  There should be a new 0.1.2 release  
> candidate available soon if you want to wait on that instead.
>
> Oh, and thanks for the compliments.
>
> St.Ack
>
>
> Rick Hangartner wrote:
>> Hi,
>>
>> First let me compliment the hbase developers for their work in  
>> offering this very useful tool to the world.
>>
>> We are running hbase-0.1.1 on top of hadoop-0.16.3, starting the  
>> hbase daemon from an "hbase" user account and the hadoop daemon  
>> from a "hadoop" user account, and have observed this "feature".  
>> We are running hbase in its own separate "hbase" user account and  
>> hadoop in its own "hadoop" user account on a single machine.
>>
>> When we try to start up hbase, we see this error message in the log:
>>
>> 2008-05-06 12:09:02,845 ERROR org.apache.hadoop.hbase.HMaster: Can not start master
>> java.lang.reflect.InvocationTargetException
>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>    at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>    at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3329)
>>    at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3363)
>> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Superuser privilege is required
>>        ... (etc)
>>
>> We get this no matter whether the "hadoop-site.xml" includes any  
>> one of these properties, or none of them:
>>
>>  <property>
>>    <name>dfs.web.ugi</name>
>>    <value>webuser,webgroup</value>
>>    <final>true</final>
>>  </property>
>>
>>  <property>
>>    <name>dfs.web.ugi</name>
>>    <value>webuser,supergroup</value>
>>    <final>true</final>
>>  </property>
>>
>>  <property>
>>    <name>dfs.web.ugi</name>
>>    <value>hadoop,supergroup</value>
>>    <final>true</final>
>>  </property>
>>
>> On the other hand, if we run hbase in the hadoop user account  
>> instead of its own user account, we have no problems.
>>
>> Finally, if we include this property in the "hadoop-site.xml" file  
>> to turn off the hdfs file system permission feature:
>>
>>  <property>
>>    <name>dfs.permissions</name>
>>    <value>false</value>
>>    <final>true</final>
>>  </property>
>>
>> we can start up hbase in its own user account or in the hadoop  
>> user account.
>>
>> This almost seems like it has something to do with interaction  
>> between Linux ext3 user and group permissions and the hdfs user and  
>> group permissions system.  We may have missed comments in the mail  
>> lists, FAQs or elsewhere addressing this problem, our apologies if  
>> we have.
>>
>> Congrats and thanks again on a very nice system.
>


Re: Running hBase in a different user account from hadoop

Posted by stack <st...@duboce.net>.
Is the hbase rootdir writable by the 'hbase' user? (Not from me -- I 
just turn off permissions on hadoop clusters -- but from our 
'permissions' fella).

Would also suggest your mileage will be better if you move to the head 
of the 0.1 branch.  There should be a new 0.1.2 release candidate 
available soon if you want to wait on that instead.

Oh, and thanks for the compliments.

St.Ack


Rick Hangartner wrote:
> Hi,
>
> First let me compliment the hbase developers for their work in 
> offering this very useful tool to the world.
>
> We are running hbase-0.1.1 on top of hadoop-0.16.3, starting the hbase 
> daemon from an "hbase" user account and the hadoop daemon from a 
> "hadoop" user account, and have observed this "feature".  We are 
> running hbase in its own separate "hbase" user account and hadoop in 
> its own "hadoop" user account on a single machine.
>
> When we try to start up hbase, we see this error message in the log:
>
> 2008-05-06 12:09:02,845 ERROR org.apache.hadoop.hbase.HMaster: Can not start master
> java.lang.reflect.InvocationTargetException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>     at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3329)
>     at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3363)
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Superuser privilege is required
>         ... (etc)
>
> We get this no matter whether the "hadoop-site.xml" includes any one 
> of these properties, or none of them:
>
>   <property>
>     <name>dfs.web.ugi</name>
>     <value>webuser,webgroup</value>
>     <final>true</final>
>   </property>
>
>   <property>
>     <name>dfs.web.ugi</name>
>     <value>webuser,supergroup</value>
>     <final>true</final>
>   </property>
>
>   <property>
>     <name>dfs.web.ugi</name>
>     <value>hadoop,supergroup</value>
>     <final>true</final>
>   </property>
>
> On the other hand, if we run hbase in the hadoop user account instead 
> of its own user account, we have no problems.
>
> Finally, if we include this property in the "hadoop-site.xml" file to 
> turn off the hdfs file system permission feature:
>
>   <property>
>     <name>dfs.permissions</name>
>     <value>false</value>
>     <final>true</final>
>   </property>
>
> we can start up hbase in its own user account or in the hadoop user 
> account.
>
> This almost seems like it has something to do with interaction between 
> Linux ext3 user and group permissions and the hdfs user and group 
> permissions system.  We may have missed comments in the mail lists, 
> FAQs or elsewhere addressing this problem, our apologies if we have.
>
> Congrats and thanks again on a very nice system.