Posted to common-user@hadoop.apache.org by "Zheng, Kai" <ka...@intel.com> on 2012/09/07 11:20:53 UTC
Why newly created file is of supergroup as the file ownership group?
1. As shown below, the initial group ownership is supergroup, rather than any group of the operating user.
[zk@hadoop-nn ~]$ whoami
zk
[zk@hadoop-nn ~]$ id -Gn
zk ldapgroup
[zk@hadoop-nn ~]$ echo "test hadoop cluster" > hadoop-test
[zk@hadoop-nn ~]$ ls -l hadoop-test
-rw-rw-r--. 1 zk zk 20 Sep 7 04:33 hadoop-test
[zk@hadoop-nn ~]$ hadoop dfs -copyFromLocal hadoop-test /usr/hadoop/tmp/zk/
[zk@hadoop-nn ~]$ hadoop dfs -lsr /usr/hadoop/tmp/zk/
-rw-r--r-- 1 zk supergroup 20 2012-09-07 04:33 /usr/hadoop/tmp/zk/hadoop-test
The relevant Hadoop configuration settings are below.
hadoop.security.authentication --> Kerberos
dfs.permissions --> true
hadoop.security.authorization --> true
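For reference, these settings correspond to entries like the following in core-site.xml and hdfs-site.xml. This is a sketch for the 1.x line; in that era the property dfs.permissions.supergroup named the group treated as the superuser group, with default value "supergroup", which is the group name showing up in the listing above.

```xml
<!-- core-site.xml -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.permissions</name>
  <value>true</value>
</property>
<property>
  <!-- Hadoop 1.x name; later versions use dfs.permissions.superusergroup -->
  <name>dfs.permissions.supergroup</name>
  <value>supergroup</value>
</property>
```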
2. Looking at the code, I found the behavior is intended.
In 1.0.3, in NameNode.java:

/** {@inheritDoc} */
public void create(String src,
                   FsPermission masked,
                   ...) throws IOException {
  ...
  namesystem.startFile(src,
      new PermissionStatus(UserGroupInformation.getCurrentUser().getShortUserName(),
                           null, masked),
      clientName, clientMachine, overwrite, createParent, replication, blockSize);
  ...
}
In Hadoop trunk:

@Override // ClientProtocol
public void create(String src,
                   FsPermission masked,
                   String clientName,
                   ...) throws IOException {
  ...
  namesystem.startFile(src,
      new PermissionStatus(UserGroupInformation.getCurrentUser().getShortUserName(),
                           null, masked),
      clientName, clientMachine, flag.get(), createParent, replication, blockSize);
  ...
}
We can see that null is passed as the group argument to PermissionStatus(String user, String group, FsPermission permission).
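The null group is not the end of the story: the NameNode can substitute a group for the inode after the fact. A minimal sketch of that substitution, using a hypothetical helper (Hadoop's real PermissionStatus lives in org.apache.hadoop.fs.permission and differs in detail):

```java
// Hypothetical helper, not actual Hadoop code: when create() passes null
// as the group, the server side can fill in the parent directory's group
// before persisting the new inode.
public class Main {
    static String resolveGroup(String requestedGroup, String parentDirGroup) {
        // null from the client means "no explicit group"; fall back to parent's.
        return requestedGroup != null ? requestedGroup : parentDirGroup;
    }

    public static void main(String[] args) {
        // The client sent group == null; the parent /usr/hadoop/tmp/zk has
        // group "supergroup", so the new file ends up with that group.
        System.out.println(resolveGroup(null, "supergroup")); // supergroup
    }
}
```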
3. So the question is: why does Hadoop intend this, using the default value 'supergroup' as the initial group ownership for newly created files?
And if so, how does the fs permission checker check permissions against the file's group? I guess the group is not so relevant then.
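On the second question, the file's group does matter to the checker. A sketch of a POSIX-style permission check (not actual Hadoop code; the real logic is in FSPermissionChecker): if the caller is not the owner but belongs to the file's group, the group bits apply, otherwise the "other" bits do.

```java
import java.util.Set;

// Toy model of a POSIX-style read-permission check.
public class Main {
    static boolean canRead(String user, Set<String> userGroups,
                           String owner, String group, int mode /* e.g. 0644 */) {
        int bits;
        if (user.equals(owner))              bits = (mode >> 6) & 7; // owner bits
        else if (userGroups.contains(group)) bits = (mode >> 3) & 7; // group bits
        else                                 bits = mode & 7;        // other bits
        return (bits & 4) != 0; // test the read bit
    }

    public static void main(String[] args) {
        // zk owns the file zk:supergroup 0644, so the owner bits apply.
        System.out.println(canRead("zk", Set.of("zk", "ldapgroup"),
                                   "zk", "supergroup", 0644)); // true
        // A third user not in supergroup falls through to the other bits of 0640.
        System.out.println(canRead("other", Set.of("ldapgroup"),
                                   "zk", "supergroup", 0640)); // false
    }
}
```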
I'm learning Hadoop, and everything is new and interesting. Thanks for any answers, corrections, and anything else.
Kai
RE: Why newly created file is of supergroup as the file ownership group?
Posted by "Zheng, Kai" <ka...@intel.com>.
Thank you, Harsh! It seems I should have read the official docs more carefully.
-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com]
Sent: Friday, September 07, 2012 5:45 PM
To: user@hadoop.apache.org
Subject: Re: Why newly created file is of supergroup as the file ownership group ?
The Hadoop permissions model follows the BSD-style permissions. On http://hadoop.apache.org/common/docs/stable/hdfs_permissions_guide.html
we have this behavior mentioned "When a file or directory is created, its owner is the user identity of the client process, and its group is the group of the parent directory (the BSD rule).".
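The BSD rule the guide describes can be sketched with a toy in-memory namespace (not Hadoop code): a new file's group comes from its parent directory, not from the creating user's Unix groups.

```java
import java.util.HashMap;
import java.util.Map;

// Toy namespace: path -> group, with BSD-style group inheritance on create.
public class Main {
    static Map<String, String> groupOf = new HashMap<>();

    static void createFile(String path) {
        int slash = path.lastIndexOf('/');
        String parent = slash > 0 ? path.substring(0, slash) : "/";
        groupOf.put(path, groupOf.get(parent)); // inherit the parent dir's group
    }

    public static void main(String[] args) {
        // The target directory happens to have group "supergroup"...
        groupOf.put("/usr/hadoop/tmp/zk", "supergroup");
        // ...so a file created inside it inherits that group, regardless
        // of the creating user's groups (zk, ldapgroup).
        createFile("/usr/hadoop/tmp/zk/hadoop-test");
        System.out.println(groupOf.get("/usr/hadoop/tmp/zk/hadoop-test")); // supergroup
    }
}
```

This matches the listing in the original post: chgrp-ing the parent directory would change what newly created files inherit.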
--
Harsh J
Re: Why newly created file is of supergroup as the file ownership group?
Posted by Harsh J <ha...@cloudera.com>.
The Hadoop permissions model follows the BSD-style permissions. On
http://hadoop.apache.org/common/docs/stable/hdfs_permissions_guide.html
we have this behavior mentioned "When a file or directory is created,
its owner is the user identity of the client process, and its group is
the group of the parent directory (the BSD rule).".
--
Harsh J