Posted to common-user@hadoop.apache.org by Stas Oskin <st...@gmail.com> on 2009/08/17 16:37:26 UTC

Non-root user blocks root on DFS?

Hi.

I have a directory created by the root user with 777 permissions.

When an application running under a non-root user (called dev1) created
sub-directories in this directory, it made some directories with 777 and
some with 755. This prevents the app launched under the root user from
erasing files in these directories, and it throws the following
exception:

org.apache.hadoop.fs.permission.AccessControlException:
org.apache.hadoop.fs.permission.AccessControlException: Permission denied:
user=root, access=WRITE, inode="snapshots":dev1:supergroup:rwxr-xr-x
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:52)
    at org.apache.hadoop.dfs.DFSClient.delete(DFSClient.java:530)
    at org.apache.hadoop.dfs.DistributedFileSystem.delete(DistributedFileSystem.java:210)
    at org.util.FileUtils.deleteFile(FileUtils.java:365)
Caused by: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.fs.permission.AccessControlException: Permission denied:
user=root, access=WRITE, inode="snapshots":dev1:supergroup:rwxr-xr-x
    at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:175)
    at org.apache.hadoop.dfs.PermissionChecker.check(PermissionChecker.java:156)
    at org.apache.hadoop.dfs.PermissionChecker.checkPermission(PermissionChecker.java:107)
    at org.apache.hadoop.dfs.FSNamesystem.checkPermission(FSNamesystem.java:4238)
    at org.apache.hadoop.dfs.FSNamesystem.deleteInternal(FSNamesystem.java:1527)
    at org.apache.hadoop.dfs.FSNamesystem.delete(FSNamesystem.java:1497)
    at org.apache.hadoop.dfs.NameNode.delete(NameNode.java:425)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:890)

    at org.apache.hadoop.ipc.Client.call(Client.java:716)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
    at org.apache.hadoop.dfs.$Proxy17.delete(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.dfs.$Proxy17.delete(Unknown Source)
    at org.apache.hadoop.dfs.DFSClient.delete(DFSClient.java:528)
    ... 5 more



This raises the following questions:

1) Is the root user treated the same as any non-root user in DFS?
2) Any idea why dev1 created some directories with 777 and some with
755, even though their parent directory was 777?

There is no code in the application that explicitly sets directory
permissions.
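The only mechanism I can think of that would turn a requested 777 into 755 is a Unix-style umask being applied at create time (if I remember right, HDFS masks the requested mode with a client-side umask, dfs.umask, defaulting to 022). A quick sketch of that arithmetic, just to illustrate my guess -- this is not Hadoop code, only the masking rule itself:

```java
// Hypothetical sketch of Unix-style umask arithmetic, not actual Hadoop code.
public class UmaskSketch {
    // The mode actually applied is the requested mode with the umask bits cleared.
    static int applyUmask(int requested, int umask) {
        return requested & ~umask;
    }

    public static void main(String[] args) {
        // A requested 777 with the common default umask 022 yields 755 (rwxr-xr-x).
        System.out.println(Integer.toOctalString(applyUmask(0777, 022))); // prints "755"
        // With umask 000 the directory would keep the full 777.
        System.out.println(Integer.toOctalString(applyUmask(0777, 000))); // prints "777"
    }
}
```

If that is what is happening, it would also explain why only some directories came out 755: only the creates that went through a client with a non-zero umask would be masked.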

Thanks for any ideas.

Re: Non-root user blocks root on DFS?

Posted by Stas Oskin <st...@gmail.com>.
Hi Brian.

2009/8/17 Brian Bockelman <bb...@cse.unl.edu>

> Hey Stas,
>
> IIRC, the user with "special privileges" in Hadoop is the superuser.  By
> default, the superuser is the user who runs the Hadoop Namenode.
>

I think you're right, since I need to su to hadoop in order to use the shell.


>
> So, if the user "hadoop" runs the Hadoop Namenode, then "hadoop" is the
> superuser who has the permissions normally given to root in Unix.  I don't
> remember if there is a way to set the superuser manually.
>

I see, it's clear now.


>
> Am I correct in guessing that the namenode is running as non-root?
>

You're completely right.

So this explains why "root" is unable to erase the files created by "dev1".
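To spell out the check that fails (a plain-Java model of the POSIX-style rule, not Hadoop's actual code): deleting an entry requires write permission on its parent directory, and root matches neither the owner dev1 nor, presumably, the supergroup, so it falls into the "other" class of the rwxr-xr-x directory, which has no write bit:

```java
// Plain-Java model of the failing permission check, not actual Hadoop code.
public class DeleteCheckSketch {
    // The "other" write bit is the 02 bit of the mode's lowest octal digit.
    static boolean otherCanWrite(int mode) {
        return (mode & 02) != 0;
    }

    public static void main(String[] args) {
        // dev1's directory is rwxr-xr-x == 0755; root is treated as "other" here
        // (assuming root is not in the supergroup group).
        System.out.println(otherCanWrite(0755)); // prints "false" -> delete denied
        // A 777 directory would have allowed the delete.
        System.out.println(otherCanWrite(0777)); // prints "true"
    }
}
```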


The question remains: how did "dev1" end up creating directories with 755
permissions when the parent directory had 777 permissions?

Also, is there a way to make a non-superuser into a superuser?

Regards.

Re: Non-root user blocks root on DFS?

Posted by Brian Bockelman <bb...@cse.unl.edu>.
Hey Stas,

IIRC, the user with "special privileges" in Hadoop is the superuser.   
By default, the superuser is the user who runs the Hadoop Namenode.

So, if the user "hadoop" runs the Hadoop Namenode, then "hadoop" is  
the superuser who has the permissions normally given to root in Unix.   
I don't remember if there is a way to set the superuser manually.
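If memory serves, there is at least a dfs.permissions.supergroup setting that controls which *group* is treated as having superuser rights, so putting root into that group might work. Untested, and the property name may vary by version, but roughly this in hadoop-site.xml:

```xml
<!-- Members of this Unix group are treated as DFS superusers.
     The default group name is "supergroup"; adding root to that group
     (or pointing this at a group root already belongs to) should give
     root superuser rights on DFS. Untested sketch. -->
<property>
  <name>dfs.permissions.supergroup</name>
  <value>supergroup</value>
</property>
```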

Am I correct in guessing that the namenode is running as non-root?

Brian

On Aug 17, 2009, at 9:37 AM, Stas Oskin wrote:

> Hi.
>
> I have a directory created by the root user with 777 permissions.
>
> When an application running under a non-root user (called dev1) created
> sub-directories in this directory, it made some directories with 777 and
> some with 755. This prevents the app launched under the root user from
> erasing files in these directories, and it throws the following
> exception:
>
> org.apache.hadoop.fs.permission.AccessControlException: Permission denied:
> user=root, access=WRITE, inode="snapshots":dev1:supergroup:rwxr-xr-x
> [stack trace snipped]
>
> 1) Is the root user treated the same as any non-root user in DFS?
> 2) Any idea why dev1 created some directories with 777 and some with
> 755, even though their parent directory was 777?