Posted to common-dev@hadoop.apache.org by "Kan Zhang (JIRA)" <ji...@apache.org> on 2008/10/05 22:05:44 UTC

[jira] Created: (HADOOP-4348) Adding service-level authorization to Hadoop

Adding service-level authorization to Hadoop
--------------------------------------------

                 Key: HADOOP-4348
                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
             Project: Hadoop Core
          Issue Type: New Feature
            Reporter: Kan Zhang
            Assignee: Arun C Murthy
             Fix For: 0.20.0


Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on an HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows an M/R cluster to define its group of users so that only those users can submit jobs to it.

Here is an initial list of requirements I came up with.

    1. Users of a cluster are defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.

    2. The flat list is stored in a conf file and pushed to every cluster node so that services can access it.

    3. Services will check the conf file for modifications periodically (at a 5-minute interval by default) and reload the list if needed.

    4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.

    5. This feature can be switched off and is off by default.

I'm aware of interest in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRAs.
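For illustration only, such a flat list could be expressed as a conf-file property along these lines; the property name and value syntax shown here are assumptions for discussion, not a final format (comma-separated users, a space, then comma-separated groups):

{code:xml}
<property>
  <name>security.client.protocol.acl</name>
  <!-- hypothetical example: users alice and bob plus two groups -->
  <value>alice,bob datausers,analysts</value>
</property>
{code}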

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642227#action_12642227 ] 

Owen O'Malley commented on HADOOP-4348:
---------------------------------------

The blocker from my point of view is that if we put it just in the RPC layer, it is impossible to use any secure transport. In particular, I don't want to block the capability of having Kerberos-encrypted RPC channels. This is very important. Furthermore, it would be much better if the Kerberos authentication only had to happen once per connection rather than once per RPC.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642290#action_12642290 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

> It might be easier to encrypt and decrypt individual messages, implement handshakes, etc. on top of a binary message exchange protocol than to intermix that logic with the non-blocking client and server message exchange implementation.

The implementation should still be layered. You can think of it as a wrapper object that internally wraps a Kerberos layer on top of a normal non-blocking socket and externally exposes a non-blocking socket interface. This will make Kerberos transparent to whatever message exchange protocol you want to run on top of it, be it RPC or not.
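A minimal sketch of such a wrapper, assuming the Kerberos layer is reached through SASL/GSSAPI and ignoring the buffering a real non-blocking implementation would need (all names here are illustrative, not Hadoop code):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ByteChannel;
import javax.security.sasl.SaslClient;

/** Illustrative only: hides the security layer behind the same
 *  channel interface the caller already uses. */
class SecureByteChannel implements ByteChannel {
  private final ByteChannel inner;  // the plain socket channel
  private final SaslClient sasl;    // negotiated Kerberos/GSSAPI context

  SecureByteChannel(ByteChannel inner, SaslClient sasl) {
    this.inner = inner;
    this.sasl = sasl;
  }

  public int write(ByteBuffer src) throws IOException {
    byte[] clear = new byte[src.remaining()];
    src.get(clear);
    // Integrity-protect/encrypt, then hand off to the raw channel.
    byte[] wrapped = sasl.wrap(clear, 0, clear.length);
    inner.write(ByteBuffer.wrap(wrapped));   // partial writes ignored in this sketch
    return clear.length;
  }

  public int read(ByteBuffer dst) throws IOException {
    ByteBuffer raw = ByteBuffer.allocate(8192);
    int n = inner.read(raw);
    if (n <= 0) {
      return n;
    }
    // Verify/decrypt before exposing bytes to the protocol above.
    byte[] clear = sasl.unwrap(raw.array(), 0, n);
    dst.put(clear);
    return clear.length;
  }

  public boolean isOpen() { return inner.isOpen(); }

  public void close() throws IOException { inner.close(); }
}
{code}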

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642251#action_12642251 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

Owen> if we put it just in the RPC layer, it is impossible to use any secure transport

That's an argument about authentication, not authorization, no?  Also, does Kerberos really require a separate TCP connection per client, or rather does it require separate handshakes and encryptions per client?  We could establish a separate session per client over a shared connection...

A useful transport abstraction is reliable binary message exchange.  UDP is unreliable and limits the message size, and TCP is stream-based.  We wish to exchange binary messages with a server as efficiently as possible.  It will be simpler to focus on this problem if we can keep it separate from others.  Handshakes, encryption, sessions, etc, can all be layered on top of this abstraction, just as SSL and HTTP are layered on TCP.  Perhaps this abstraction will prove impractical...

We need to agree on what features our transport layer provides.  Do we want the lowest level to provide authenticated, authorized message exchange, or simply message channels, with authentication and authorization layered above?  Or do we want to build on SSL-like standards (authenticated byte streams) and layer per-user message exchange on that?  Even in the last case, authorization can be layered above message exchange, and I think the lower we can push message exchange the simpler things will be.  We don't want application authorization logic mixed in with async i/o concerns if we can possibly avoid it.
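As a sketch of the abstraction being discussed (interface and method names are hypothetical, not existing Hadoop APIs), the transport would move only opaque, self-contained messages, with handshakes, sessions, and authorization layered on top:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

/** Hypothetical "reliable binary message exchange" abstraction. */
interface MessageTransport {

  /** Send one self-contained binary message to the peer. */
  void send(ByteBuffer message) throws IOException;

  /** Return the next complete message (a non-blocking variant
   *  would deliver it to a callback instead). */
  ByteBuffer receive() throws IOException;
}
{code}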


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653573#action_12653573 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-4348:
------------------------------------------------

Some comments on the patch:

- Do we need to check for superuser in order to run refreshAuthorizationPolicy() in MRAdmin and DFSAdmin?  

- Permission subclasses must be immutable, so their fields should be final.

- Similarly, the User and Group principals should also be immutable.

- Why does any ConnectionPermission imply the ConnectionPermission of VersionedProtocol?  Also, the implies(..) method needs javadoc.

- ConnectionPermission.getActions() should not return null.  How about returning ALLOWED?  Then we could support negative permission settings by adding DENIED in the future.

- Need descriptions for the properties in hadoop-policy.xml, especially for the syntax.  These documentation changes may be done in a separate issue.

- I think using multiplication in hashCode() may not be a good idea.  How about address.hashCode() ^ protocol.hashCode() ^ System.identityHashCode(ticket)? (See the sketch after this list.)

- Should we throw IOException instead of RuntimeException when there is a LoginException?  Also, it is better to include the cause exception in the new exception.

- In AuthorizationException, how about calling setStackTrace(null) to clean up the trace?

- The changes in IsolationRunner and LocalJobRunner should be reverted.
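A sketch of the hashCode() suggested above, assuming a connection-id class with fields named address, protocol and ticket as in the comment (the class shown here is illustrative, not the patch's code):

{code}
import java.net.InetSocketAddress;

/** Illustrative connection identifier; fields mirror the comment above. */
class ConnectionId {
  InetSocketAddress address;
  Class<?> protocol;
  Object ticket;          // e.g. the caller's user/group information

  @Override
  public int hashCode() {
    // XOR instead of multiplication, per the review comment.
    return address.hashCode() ^ protocol.hashCode()
        ^ System.identityHashCode(ticket);
  }
}
{code}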


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HADOOP-4348:
----------------------------------

    Attachment: jaas_service_v1.patch

Attached is a patch to illustrate my point. It is in a *very* early state.

The patch introduces UserPrincipal and GroupPrincipal, defining the user name and groups. There is a concrete JAASUUGI implementation that can be used in hdfs and mapred.
We can construct a JAASUUGI from Kerberos, from a simple username, or from the conf, or use unixuugi.
The security.permissions package contains permissions that can be enforced. ConnectionPermission is enforced in the IPC layer (after the connection header is read) and RPCProtocolPermission can be enforced in the RPC layer, before processing the call.

The patch does not work for now, but I hope it can advance the discussion. Thoughts?
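For readers unfamiliar with java.security permissions, this is roughly the shape such a check can take once the connection header has been read: run the check as the authenticated JAAS Subject and ask the access controller whether the permission is granted. This sketches only the standard Policy-based route (later comments in this thread argue against using the java policy file); class and method names are illustrative.

{code}
import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.Permission;
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

/** Illustrative sketch of a JAAS-based connection check. */
class ConnectionAuthorizer {
  static void authorize(Subject caller, Permission required) {
    // Capture an access-control context that includes the caller's principals.
    AccessControlContext context =
        Subject.doAs(caller, new PrivilegedAction<AccessControlContext>() {
          public AccessControlContext run() {
            return AccessController.getContext();
          }
        });
    // Throws AccessControlException if the installed Policy denies it.
    context.checkPermission(required);
  }
}
{code}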

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_4_20081205.patch

Correct patch.

All unit tests pass...

{noformat}
 [exec] -1 overall.  
     [exec] 
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec] 
     [exec]     +1 tests included.  The patch appears to include 28 new or modified tests.
     [exec] 
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
     [exec] 
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
     [exec] 
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
     [exec] 
     [exec]     -1 Eclipse classpath. The patch causes the Eclipse classpath to differ from the contents of the lib directories.
{noformat}


I'm not sure why test-patch.sh is complaining about Eclipse classpath - this patch certainly doesn't introduce any new jars.
Here is the relevant output I can't seem to make sense of:
{noformat}
     [exec] ======================================================================
     [exec] ======================================================================
     [exec]     Running Eclipse classpath verification.
     [exec] ======================================================================
     [exec] ======================================================================
     [exec] 
     [exec] 
     [exec] 
     [exec] FAILED. Some jars are not declared in the Eclipse project.
     [exec]   Declared jars: lib/commons-cli-2.0-SNAPSHOT.jar
     [exec] lib/commons-codec-1.3.jar
     [exec] lib/commons-httpclient-3.0.1.jar
     [exec] lib/commons-logging-1.0.4.jar
     [exec] lib/commons-logging-api-1.0.4.jar
     [exec] lib/commons-net-1.4.1.jar
     [exec] lib/hsqldb-1.8.0.10.jar
     [exec] lib/jets3t-0.6.1.jar
     [exec] lib/jetty-6.1.14.jar
     [exec] lib/jetty-util-6.1.14.jar
     [exec] lib/jsp-2.1/core-3.1.1.jar
     [exec] lib/jsp-2.1/jsp-2.1.jar
     [exec] lib/jsp-2.1/jsp-api-2.1.jar
     [exec] lib/junit-3.8.1.jar
     [exec] lib/kfs-0.2.2.jar
     [exec] lib/log4j-1.2.15.jar
     [exec] lib/oro-2.0.8.jar
     [exec] lib/servlet-api-2.5-6.1.14.jar
     [exec] lib/slf4j-api-1.4.3.jar
     [exec] lib/slf4j-log4j12-1.4.3.jar
     [exec] lib/xmlenc-0.52.jar
     [exec] src/test/lib/ftplet-api-1.0.0-SNAPSHOT.jar
     [exec] src/test/lib/ftpserver-core-1.0.0-SNAPSHOT.jar
     [exec] src/test/lib/ftpserver-server-1.0.0-SNAPSHOT.jar
     [exec] src/test/lib/mina-core-2.0.0-M2-20080407.124109-12.jar
     [exec]   Present jars:  lib//commons-cli-2.0-SNAPSHOT.jar
     [exec] lib//commons-codec-1.3.jar
     [exec] lib//commons-httpclient-3.0.1.jar
     [exec] lib//commons-logging-1.0.4.jar
     [exec] lib//commons-logging-api-1.0.4.jar
     [exec] lib//commons-net-1.4.1.jar
     [exec] lib//hsqldb-1.8.0.10.jar
     [exec] lib//jets3t-0.6.1.jar
     [exec] lib//jetty-6.1.14.jar
     [exec] lib//jetty-util-6.1.14.jar
     [exec] lib//jsp-2.1/core-3.1.1.jar
     [exec] lib//jsp-2.1/jsp-2.1.jar
     [exec] lib//jsp-2.1/jsp-api-2.1.jar
     [exec] lib//junit-3.8.1.jar
     [exec] lib//kfs-0.2.2.jar
     [exec] lib//log4j-1.2.15.jar
     [exec] lib//oro-2.0.8.jar
     [exec] lib//servlet-api-2.5-6.1.14.jar
     [exec] lib//slf4j-api-1.4.3.jar
     [exec] lib//slf4j-log4j12-1.4.3.jar
     [exec] lib//xmlenc-0.52.jar
     [exec] src/test/lib//ftplet-api-1.0.0-SNAPSHOT.jar
     [exec] src/test/lib//ftpserver-core-1.0.0-SNAPSHOT.jar
     [exec] src/test/lib//ftpserver-server-1.0.0-SNAPSHOT.jar
     [exec] src/test/lib//mina-core-2.0.0-M2-20080407.124109-12.jar
{noformat}

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642016#action_12642016 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

> The problem is that the service authorization should be checked at the ipc layer rather then the rpc layer.

Why?  I think authorization should be per-request, not per-connection.  That will give us transport flexibility.  We can and should multiplex requests and responses over shared pipes.  It would be a feature if each request were self-contained, not relying on connection state.  Our RPC system should sit atop a message-based API, not a connection-based API.  Connection management is an implementation detail of that layer.  Authorization is protocol-specific, and eventually method- and parameter-specific, and thus in the domain of RPC.  The transport should know nothing of protocols, methods and parameters.  The transport should serialize requests and deliver them to a request processor.
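A sketch of the per-request placement argued for here, with every name (Authorizer, RpcDispatcher) hypothetical; the point is only that the check happens once per call, against the protocol the call targets, with no reliance on connection state:

{code}
import java.io.IOException;
import java.lang.reflect.Method;

/** Hypothetical per-call authorization hook. */
interface Authorizer {
  /** Throws if this user may not invoke this protocol. */
  void authorize(String user, Class<?> protocol) throws IOException;
}

/** Hypothetical RPC-layer dispatcher that authorizes each request. */
class RpcDispatcher {
  private final Authorizer authorizer;
  private final Object implementation;   // object implementing the protocol

  RpcDispatcher(Authorizer authorizer, Object implementation) {
    this.authorizer = authorizer;
    this.implementation = implementation;
  }

  Object call(String user, Class<?> protocol, Method method, Object[] args)
      throws Exception {
    // Each request is self-contained: authorize it here, per call.
    authorizer.authorize(user, protocol);
    return method.invoke(implementation, args);
  }
}
{code}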



> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642278#action_12642278 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

Enis,
  I think your point is:
# rather than invent new authorization layers, use JAAS
# we should have used JAAS in the first place when we did UGI.

When we did permissions/UGI we did look at JAAS briefly but due to time pressures we went with our own impl. Our feeling was to reexamine this later.
Basically we set the UGI context in a thread local variable. JAAS does the same.

I think the rest of the debate in this jira about  which layer (ipc or rpc) to use still stands. 
JAAS does support per-call authorization; I don't know if JAAS supports session-level authorization when the session is created. However, JAAS combined with GSS does authentication at the connection level.

If we use JAAS then
# we should not turn on the java security manager - it is not needed and it is expensive
# we should not put the acl in the java security policy file - the policy file syntax is complex and not necessary for us.

I believe JMX uses JAAS without turning on the security manager, and further puts the ACL in a separate file (I will verify whether JMX uses JAAS).
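A minimal sketch of that combination, assuming the ACL has already been parsed out of its own conf file into user and group sets; it checks the JAAS Subject's principals directly, with no SecurityManager and no java policy file involved (all names are illustrative):

{code}
import java.security.Principal;
import java.util.Set;
import javax.security.auth.Subject;

/** Illustrative service-level ACL check driven by JAAS principals. */
class ServiceAclCheck {
  private final Set<String> allowedUsers;
  private final Set<String> allowedGroups;

  ServiceAclCheck(Set<String> allowedUsers, Set<String> allowedGroups) {
    this.allowedUsers = allowedUsers;    // loaded from a separate conf file
    this.allowedGroups = allowedGroups;
  }

  /** True if any of the caller's principals is listed in the ACL. */
  boolean isAllowed(Subject caller) {
    for (Principal p : caller.getPrincipals()) {
      if (allowedUsers.contains(p.getName())
          || allowedGroups.contains(p.getName())) {
        return true;
      }
    }
    return false;
  }
}
{code}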

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Hemanth Yamijala (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637365#action_12637365 ] 

Hemanth Yamijala commented on HADOOP-4348:
------------------------------------------

Per HADOOP-3698, we specify authorized user lists for each operation using a configuration variable such as {{mapred.queue.queue-name.operation-name}} in hadoop-site or hadoop-default. The format is something like this:
{code:xml}
<property>
  <name>mapred.queue.default.acl-administer-jobs</name>
  <value>alice,bob group1,group2</value>
</property>
{code}

'*' is used to specify 'all allowed'. A blank value specifies that no one is allowed.

I meant these kinds of interfaces to the administrator, and I think they should be consistent between the ACLs feature and what we choose to do here in this JIRA.
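For concreteness, parsing a value in that format could look roughly like this (class and method names are hypothetical, not the HADOOP-3698 code): a comma-separated user list, a space, then a comma-separated group list, with '*' meaning everyone and a blank value meaning no one.

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

/** Illustrative parser for a "users groups" ACL value. */
class ParsedAcl {
  final boolean allAllowed;
  final Set<String> users = new HashSet<String>();
  final Set<String> groups = new HashSet<String>();

  ParsedAcl(String aclString) {
    String value = aclString == null ? "" : aclString.trim();
    allAllowed = value.equals("*");
    if (!allAllowed && value.length() > 0) {
      String[] parts = value.split("\\s+", 2);          // "users groups"
      users.addAll(Arrays.asList(parts[0].split(",")));
      if (parts.length > 1) {
        groups.addAll(Arrays.asList(parts[1].split(",")));
      }
    }
  }

  boolean isAllowed(String user, Set<String> userGroups) {
    return allAllowed || users.contains(user)
        || !Collections.disjoint(groups, userGroups);
  }
}
{code}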

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642260#action_12642260 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

Sanjay, I'm fine with adding a session layer someday.  That would probably be useful.

Much of what you speak of is authentication, which may have to be session-based rather than per-message.  But, so far as I can see, service-level authorization belongs somewhere between our two existing layers, so we need to choose one, rather arbitrarily.  My initial observation was merely that, with the two layers we have now, it would be more simply implemented at the RPC layer.  Looking at the patch, I think it would take fewer lines of easier-to-read code to implement this feature there.  (Compare earlier with later versions of HADOOP-4049.)  Authorization, as specified in this issue, requires just a lookup of the username and group in a hashtable, so performance should not be an issue.

So if we want to add service-level authorization now, without first re-architecting things, and we don't see a strong performance or functionality reason against checking service-level authorization per-request, then we should put it at the RPC layer.  If we later re-architect things into more layers, as we probably should, then service-level authorization will end up in some new, intermediate layer.

My initial comments were a response to the patch, not an architectural treatise.  Let's keep things grounded in code.  If someone wants to re-architect RPC in this patch, that's fine.  But until then, we need to decide how to fit the desired feature into the existing code while making the least mess.  If someone can contribute a patch which implements this feature as cleanly and simply at the IPC level alone as I think it can be done at the RPC level, that'd be great.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Hemanth Yamijala (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637358#action_12637358 ] 

Hemanth Yamijala commented on HADOOP-4348:
------------------------------------------

Kan, is this understanding correct:

- Service level authorization will check the initial connection of a client to verify if the user is a 'valid user' in this cluster.
- The kind of authorization we added in HADOOP-3698 comes later, and is more fine-grained. That is, the user is a 'valid user' in the cluster, but does he have rights to, for example, submit a job?

I think it would be a requirement that we use similar interfaces (say, to the administrator and for any APIs we propose) for these different, but clearly related, types of authorization. Does this make sense?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Hemanth Yamijala (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636990#action_12636990 ] 

Hemanth Yamijala commented on HADOOP-4348:
------------------------------------------

Is the part about specifying users/groups and allowing access only to them similar to HADOOP-3698, which was fixed for Hadoop 0.19?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_0_20081022.patch

Preliminary patch for review while I continue testing. Things to do:
1. Test cases
2. Command line utilities for reloading the auth config etc.


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12655426#action_12655426 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

Went over your patch. Some feedback:


1) ConnectionHeader should have javadoc since it is part of the protocol.

2) It is bizarre for ConnectionHeader to be configurable!!
   Server side implementation details should not leak into protocol types.
   I see why you have done it. Change getProtocol to return a string and
convert to the class name outside. 

3) The Phantom "authorize" call.
3a) Add comments to the status enum:
  In particular 
  FATAL(-1) // occurs if the connection authorization fails.
            // Only occurs with callid of -1.

3b) You use a call id of -1 for the authorize call.
   Define a constant AUTHORIZE_CONNECTION_CALLID = -1

3c) Given that FATAL only occurs for specific special phantom calls,
   it would be cleaner for you to deduce that authorization failed
   by writing your code as:

{code}
   if (state == Status.FATAL.state) {
      if (id == AUTHORIZE_CONNECTION_CALLID) {
         // authorization failed
      } else if (id == PROTOCOL_CHECK_CALLID) {
         // protocol does not match (see my Item-10 below)
      }
      // ... other phantom calls
   }
{code}

3d) Somewhere in the connection establishment code or receiveResponse,
    you need to add a comment that when the connection is established,
    a failed authorization check will respond with a call-response
    for the phantom callid of AUTHORIZE_CONNECTION_CALLID.

Consider the following:
authorization could be done as part of creating the proxy on the client side,
with proxy creation failing if authorization fails.  In your code
there is no real AUTHORIZE_CONNECTION_CALL, only a reply to the call when the
call fails.  Doing a real AUTHORIZE_CONNECTION_CALL when the proxy is created would
be conceptually cleaner and should be considered; the counter-argument is that
it would add latency to the first call.
If we decide to do this, perhaps it should be a separate jira.


4) Coding style:  ipc.server.java
   Move the following declaration above readAndProcess(),
   which uses the authFailedCall variable declared *afterwards*:
    Call authFailedCall = new Call(-1, null, null);

5) Following comment in code is incorrect.
{code}
    /// Reads the connection header following version and 
    /// authorizes the connection
    private void processHeader() throws IOException {
{code}

    The authorization is not done in this processHeader(); it is done by the
    caller after the header is processed. Please fix the comment.
    (Actually it would be cleaner for processHeader() to do the authorization and throw the
    exception. If you decide to do this, call it processHeaderAndAuthorize().)

6. ipc.server.authorize()
   - should this be an abstract method with no default implementation?

7. Add javadoc for the params of ipc.server.call()

8. You deprecated two call() methods in ipc.server.
   Why not remove these deprecated methods, since IPC/RPC is a hadoop
   internal/private interface?

9. Move your new method setupResponse to the Call class. 

10.  On the server side, check the class name (protocol) against any base class of
    the server instance when the connection header is received.
  - if the protocol does not match, then reply FATAL and close the connection
     (see my Item-3c above)

11.  On the IPC/RPC server side you eliminated the need for UGI by using
     the JAAS subject. This is good.
File jiras to similarly use JAAS in other parts of the code.

 11a) File a jira to remove UGI on the rpc/ipc client side (i.e., use JAAS instead)
    - None of the getProxy methods should have a UGI parameter
        - getProxy should get it via login
           (perhaps only if there is no thread subject? not sure)

 11b) File a jira for fixing the calls to UGI in our various servers (NN, DN, JT, etc.)
      - getCurrentUGI should be private; add public getCurrentUser() & getCurrentGroup()

12. Add javadoc in securityUtil to say that it is for service-level authorization.
    Or is it more general than that?
   Methods like authorize apply only to service-level access, not to file-
  or object-level access.  Hence method names should include words that indicate
  they relate to "service level authorization".

13. service.java - what is the servicekey (is it serviceName?)
I don't understand what getPermission is. Is it the permission granted for the service?
Your comment is a little confusing.


14. HDFSPolicyProvider and MRPolicyProvider.

 14a)  Add javadoc which says
   -  this is the default policy provider for the... and that these are
       the only protocols that will be allowed access.
   - the acl in the acl policy file will specify the list of users/groups allowed access.

  14b) I think we can do without these (you can do this in a new jira if you like)
 - add a parameter to getProxy( existing parameters, protocolList);
     protocolList is an array (or a var args) that is a triple
          {className, version#, protocolKey}

15) Move the doAs in securityUtil.authorize() to the IPC/RPC layer
   (it belongs there and will match the other doAs in the ipc/rpc layer).
      Hence you can remove the authorize() method from the securityUtil.

In the Item-14b jira, when we change rpc.getServer() to add the rpc interfaces list,
  move the setPolicy to inside the rpc layer. This will then simplify all server code
  that creates rpc servers.

16) SecurityUtil.getSubject()
   create the hash set with the right number of entries (user plus number of groups)
    - note this is called on every rpc, so it should be reasonably efficient.
  Question - should this be a static method on the UGI class instead?

17) You added a method to dfsAdmin called refreshAuthorizationPolicy().
Some points:
  - Refresh is about refreshing the policy file which contains the acls; correct?
      - if so, change the name of the method to refreshServiceAcls or something like that.
      - also change the method RefreshAuthorizationPolicyProtocol.refresh() to
         refreshServiceAcls()
          - refresh is too general a name for a method inside a mixin interface.


18) General comment: make sure that all the classes/interfaces you added to
src/core have good javadoc so that folks know when and how to use them.



> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_2_20081202.patch

Updated patch with command-line utility support for a 'hup' (policy refresh) once hadoop-policy.xml is changed by the admin...

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642024#action_12642024 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

Strictly, RPC does not imply any notion of a connection. One could have a connection per RPC call.
The sharing of a connection across calls is merely an optimization.

I think the question you have raised is deeper than at which layer to implement service authorization.

We already have method level authorization in HDFS - each method implements its own authorization against the file object being accessed.
This Jira proposes to add service level authorization.
The notion of service level authorization (as opposed to RPC method level authorization) implies that one is performing authorization for a session of RPC calls. Since, strictly speaking, RPC has no notion of a session as stated above, one *could* argue that, from a layering point of view, service level authorization does not make sense for RPC in the first place.
But conceptually we *do* want service level authorization (as in: a set of users is allowed to access the methods of a particular service).
The best way to represent that service access is when a service proxy object is created - e.g. when the connection is established.
We could share multiple service sessions in a single connection, but that complexity is not worth it.

So Doug, I see your argument as equivalent to arguing against service level authorization and saying that method level authorization is sufficient.
Clearly method level is sufficient. I feel service level is useful (even though not necessary) and that it is best captured below the method level invocation; in the current impl that is at the IPC layer. When you created the IPC layer, did you envision multiple upper layers (such as RPC, reliable datagrams, streaming, etc.) sharing a single instance of the IPC layer? If you did, then I can perhaps understand your point of view. Would you be happier if we created an intermediate layer, say rpc-session, in between? I am not seriously suggesting we do that, but if conceptually that would make you happier, then simply assume that we have decided to put one session per connection.

Does the above help, or does it further confuse the matter?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_1_20081201.patch

Preliminary patch for review while I add test-cases etc.

I have taken Enis's patch further along... highlights:
# Use JAAS for authorization via ConfiguredPolicy/PolicyProvider. Introduced a hadoop-policy.xml which has ACL lists for protocols similar to the one introduced by HADOOP-3698 (I've ensured both share the same infrastructure).
# Use JAAS for identity management: introduced User and Group JAAS principals which are associated with the (authenticated) Subject in lieu of UserGroupInformation.
# Use Subject.doAs both for authorization and from ipc.Server.Handler, thereby doing away with the thread-local UGI we use currently.
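Roughly, highlight 3 amounts to something like the sketch below; only Subject.doAs itself is standard JAAS, the Call accessors are assumed for illustration:
{code}
// Run the RPC call as the authenticated caller instead of stashing a thread-local UGI;
// the connection's Subject carries the User/Group principals established at connection time.
private Writable callAsRemoteUser(final Call call) throws IOException {
  Subject remoteUser = call.getConnection().getSubject();      // assumed accessor
  try {
    return Subject.doAs(remoteUser, new PrivilegedExceptionAction<Writable>() {
      public Writable run() throws Exception {
        return call(call.getParam(), call.getTimestamp());     // the normal RPC dispatch
      }
    });
  } catch (PrivilegedActionException e) {
    throw new IOException(e.getCause());                       // unwrap the real failure
  }
}
{code}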
 


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653373#action_12653373 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

Thanks for the review Enis.

bq. Is there any advantage of doing the authentication check in RPC rather than in ipc.Server? If this one is better, then shouldn't ipc.Server#authorize() be an abstract method rather than one with an empty body.
There are implementations of IPC which do not need authorization (e.g. test cases), which is why I pulled the check into RPC.Server. I made it a concrete no-op implementation in ipc.Server to ensure that it isn't an incompatible change...

bq. <name>security.connectionPermission.org.apache.hadoop.mapred.JobSubmissionProtocol</name>
I'm uncomfortable with this approach because it exposes cluster admins to the actual Java protocols (many of which are package-private e.g. JobSubmissionProtocol) and as you've pointed out it leads to more complicated handling (Configuration.getKeys followed by a search for keys starting with a specific prefix).

I went the route of PolicyProvider to avoid enshrining code (actual protocols) in config files and exposing admins to them. It got a bit more complicated (HDFSPolicyProvider and MapReducePolicyProvider) because some protocols aren't public (JobSubmissionProtocol, TaskUmbilicalProtocol etc.).
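For readers following along, roughly what that route looks like; the Service/PolicyProvider shapes and key names below follow the description above and are not quoted from the patch:
{code}
// Lives in org.apache.hadoop.mapred so it can see the package-private protocols;
// the config file only ever refers to the short ACL keys on the left.
public class MapReducePolicyProvider extends PolicyProvider {
  private static final Service[] SERVICES = new Service[] {
      new Service("security.job.submission.protocol.acl", JobSubmissionProtocol.class),
      new Service("security.task.umbilical.protocol.acl", TaskUmbilicalProtocol.class),
  };
  @Override
  public Service[] getServices() {
    return SERVICES;
  }
}
{code}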

bq. In ConfiguredPolicy#refresh() with every reload request conf.resources arraylist gets appended which is a leak.
Good catch - I'll fix this! (and the logging of course!) 

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Component/s: security

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_7_20081210.patch

Previous patch had a minor error, fixed.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_4_20081205.patch

Fixed a couple of minor findbugs warnings...

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_4_20081205.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643117#action_12643117 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

@Sanjay/Doug - It's fairly easy to push it down a level from RPC to IPC by passing in the AuthorizationManager, so I agree. However I'm a tad reluctant since I believe it's fair-game to have a hook on connection-setup, rather than leak the AuthorizationManager into the IPC layer... *shrug*.

@Enis - Thanks for the JAAS patch, I spent a fair bit of time researching - my take:

1. The 'policy' file is extremely ungainly and very hard to use. It also is 'principal-centric' and we'd really like it to be resource-centric i.e. the 'protocol' in this case (as in: who can access JobSubmissionProtocol?). I cannot seem to find a way around using the policy file at all i.e. say plug in our own ACL list via a config file etc. I do see java.security.Policy, but I don't see it having the abstractions we need?
2. We really do not want to turn on the Java Security Manager, since it has a significant performance penalty. From what I gather (http://www.jaasbook.com/ and specifically http://www.jaasbook.com/pdfs/jaas-in-action_chapter05-03.pdf) it turns out that the recommended way to use AccessController.checkPermission is via SecurityManager.checkPermission(). This might be trouble!

I'd love your take on this - maybe I'm missing other features of JAAS? However, I think the 2 points we need to keep in mind are the same: do not use the java policy file and turn off the SecurityManager. 
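For comparison, the resource-centric check we are after is basically the following (names are illustrative, not the patch's API): keep a users/groups list per protocol and consult it directly, with no java.policy file and no SecurityManager installed.
{code}
// Illustrative only: a per-protocol ACL checked at connection/request time.
public static void authorize(UserGroupInformation user, String protocolName,
                             Map<String, Set<String>> protocolAcls) throws AccessControlException {
  Set<String> allowed = protocolAcls.get(protocolName);        // e.g. loaded from hadoop-policy.xml
  if (allowed == null) {
    throw new AccessControlException("No ACL configured for " + protocolName);
  }
  if (allowed.contains("*") || allowed.contains(user.getUserName())) {
    return;                                                    // wildcard or explicit user entry
  }
  for (String group : user.getGroupNames()) {
    if (allowed.contains(group)) {
      return;                                                  // one of the user's groups is listed
    }
  }
  throw new AccessControlException(user.getUserName() + " is not authorized to access " + protocolName);
}
{code}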

Thoughts?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642322#action_12642322 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

> The implementation should still be layered.

I agree.  But if we have to implement an SSL-like layer ourselves, it might be simpler to implement it as a session layered on a binary message-exchange API than as a non-blocking socket.  Non-blocking i/o is an awkward coding paradigm, since you have to assemble messages (including headers) incrementally and maintain control state explicitly.
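A tiny sketch of the 'binary message-exchange API' shape being contrasted with raw non-blocking sockets; purely illustrative, not an interface from the patch:
{code}
import java.io.IOException;

// One blocking request/response exchange per framed message. A session or
// authentication layer (e.g. a SASL/Kerberos handshake) can wrap this directly,
// instead of assembling partial frames by hand the way non-blocking socket code must.
public interface MessageExchange {
  byte[] exchange(byte[] request) throws IOException;
}
{code}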


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Issue Comment Edited: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12643117#action_12643117 ] 

acmurthy edited comment on HADOOP-4348 at 10/27/08 7:43 PM:
-----------------------------------------------------------------

@Sanjay/Doug - It's fairly easy to push it down a level from RPC to IPC by passing in the AuthorizationManager, so I agree. However I'm a tad reluctant since I believe it's fair-game to have a hook on connection-setup, rather than leak the AuthorizationManager into the IPC layer... *shrug*.

@Enis - Thanks for the JAAS patch, I spent a fair bit of time researching - my take:

1. The 'policy' file is extremely ungainly and very hard to use. It also is 'principal-centric' and we'd really like it to be resource-centric i.e. the 'protocol' in this case (as in: who can access JobSubmissionProtocol?). I cannot seem to find a way around using the policy file at all i.e. say plug in our own ACL list via a config file and get AccessController.checkPermission to use it. I do see java.security.Policy, but I don't see it having the abstractions we need?
2. We really do not want to turn on the Java Security Manager, since it has a significant performance penalty. From what I gather (http://www.jaasbook.com/ and specifically http://www.jaasbook.com/pdfs/jaas-in-action_chapter05-03.pdf) it turns out that the recommended way to use AccessController.checkPermission is via SecurityManager.checkPermission(). Something to keep in mind.

I'd love your take on this - maybe I'm missing other features of JAAS? However, I think the 2 points we need to keep in mind are the same: do not use the java policy file and turn off the SecurityManager. 

Thoughts?

      was (Author: acmurthy):
    @Sanjay/Doug - It's fairly easy to push it down a level from RPC to IPC by passing in the AuthorizationManager, so I agree. However I'm a tad reluctant since I believe it's fair-game to have a hook on connection-setup, rather than leak the AuthorizationManager into the IPC layer... *shrug*.

@Enis - Thanks for the JAAS patch, I spent a fair bit of time researching - my take:

1. The 'policy' file is extremely ungainly and very hard to use. It also is 'principal-centric' and we'd really like it to be resource-centric i.e. the 'protocol' in this case (as in: who can access JobSubmissionProtocol?). I cannot seem to find a way around using the policy file at all i.e. say plug in our own ACL list via a config file etc. I do see java.security.Policy, but I don't see it having the abstractions we need?
2. We really do not want to turn on the Java Security Manager, since it has a significant performance penalty. From what I gather (http://www.jaasbook.com/ and specifically http://www.jaasbook.com/pdfs/jaas-in-action_chapter05-03.pdf) it turns out that the recommended way to use AccessController.checkPermission is via SecurityManager.checkPermission(). This might be trouble!

I'd love your take on this - maybe I'm missing other features of JAAS? However, I think the 2 points we need to keep in mind are the same: do not use the java policy file and turn off the SecurityManager. 

Thoughts?
  
> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653346#action_12653346 ] 

Enis Soztutar commented on HADOOP-4348:
---------------------------------------

I have a few questions regarding the patch :

# Is there any advantage of doing the authentication check in RPC rather than in ipc.Server? If this one is better, then shouldn't ipc.Server#authorize() be an abstract method rather than one with an empty body.

# I understand that the current configuration of the policy is in line with the queue configuration. However I am not comfortable with the Service and PolicyProvider classes only holding key name -> protocol name mappings. I propose we
## introduce a Configuration#keys() method returning the key names in the configuration.
## configure the acls like:
{code} 
<property>
  <name>security.connectionPermission.org.apache.hadoop.mapred.JobSubmissionProtocol</name>
  <value>*</value>
  <description></description>
</property>
{code} 
## search for keys starting with the prefix "security.connectionPermission" and load them (see the sketch after this list). Sure, the whole set of configuration keys will be searched, which is on the order of hundreds, but I guess it will be acceptable. Moreover we can use a configuration w/o default resources (see below).

# In ConfiguredPolicy#refresh(), with every reload request the conf.resources ArrayList gets appended to, which is a leak. I guess we should create a new Configuration w/o loading default resources and add hadoop-policy.xml to it.
Alternatively we can get rid of hadoop-policy.xml, configure everything with hadoop-site.xml (core-site.xml), and use Configuration#reloadConfiguration() in ConfiguredPolicy#refresh().

Minor things:
# mradmin and refresh-auth-policy seem good.
# the LOG.info() in ConfiguredPolicy#implies() seems redundant.
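A sketch of the prefix-scan idea from the list above, combined with the fresh-Configuration suggestion; the proposed Configuration#keys() does not exist, so this assumes the existing ability to iterate a Configuration's entries, and leaves the ACL value unparsed:
{code}
// Illustrative only: load "security.connectionPermission.*" keys from a Configuration
// built without default resources, so refresh() can simply rebuild this map instead of
// appending resources to the old Configuration.
static Map<String, String> loadConnectionAcls() {
  Configuration policyConf = new Configuration(false);         // skip the default resources
  policyConf.addResource("hadoop-policy.xml");
  final String prefix = "security.connectionPermission.";
  Map<String, String> aclByProtocol = new HashMap<String, String>();
  for (Map.Entry<String, String> entry : policyConf) {         // iterate all configured entries
    if (entry.getKey().startsWith(prefix)) {
      aclByProtocol.put(entry.getKey().substring(prefix.length()), entry.getValue());
    }
  }
  return aclByProtocol;                                        // protocol class name -> "user1,user2 group1,group2"
}
{code}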

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HADOOP-4348:
----------------------------------

    Attachment: jaas_service_v2.patch

Attaching the patch for review.
It works, but lacks test cases / javadoc.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653859#action_12653859 ] 

Enis Soztutar commented on HADOOP-4348:
---------------------------------------

bq. There are implementations of IPC which do not need authorization (e.g. test cases), which is why I pulled the check into RPC.Server. I made it a concrete no-op implementation in ipc.Server to ensure that it isn't an incompatible change...
We can run the tests with authorization disabled, no?

bq. I went the route of PolicyProvider to avoid enshrining code (actual protocols) in config files and exposing admins to them. It got a bit more complicated (HDFSPolicyProvider and MapReducePolicyProvider) because some protocols aren't public (JobSubmissionProtocol, TaskUmbilicalProtocol etc.).
Well, in order for the admin to configure things per protocol, we must somehow expose the private protocols after all. From my point of view, exposing "inter.datanode.protocol" or "org.apache.hadoop.server.protocol.InterDatanodeProtocol" makes no difference. Having said that, I'm OK with the current way.


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment:     (was: HADOOP-4348_4_20081205.patch)

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649440#action_12649440 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

I'm working on the design doc and the test plan; I'll put them up shortly.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12655472#action_12655472 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

+1

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Status: Open  (was: Patch Available)

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642365#action_12642365 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

Doug> My initial comments were a response to the patch, not an architectural treaty. Let's keep things grounded in code. If someone wants to re-architect RPC in this patch, that's fine. But until then, we need to decide how to fit the desired feature into the existing code while making the least mess. If someone can contribute a patch which implements this feature as cleanly and simply at the IPC level alone as I think it can be done at the RPC level that'd be great.

Sorry, I misunderstood you earlier. I thought you were opposed to putting the service-level authorization in the ipc layer even if it is doable.
 I agree it belongs in one layer, not two. I also agree with you that if we cannot cleanly do this in the ipc layer, then the rpc level is the way to go. As I noted earlier, I think Arun's patch can easily be modified to move everything to the ipc layer:
>The problem is that this patch stores a pointer to the authorization manager in the rpc layer (at rpc end-point creation time),
> It would be better to
>let the ipc layer store this pointer. Then everything would happen in the ipc layer which is the correct place to do it.
Arun, would my suggestion work, or am I being naive?



> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12647658#action_12647658 ] 

Enis Soztutar commented on HADOOP-4348:
---------------------------------------

1. Using a JAAS Subject can make Kerberos-based authentication much easier. However, UGI is heavily used in HDFS. I guess in the short term we should use both and gradually switch to Subject. Instead of passing the UGI in the connection header and generating a Subject from it, we could pass the Subject in the header and generate the UGI from it.
2-3. Agree.
4. +1 on making UGI internal.
5. The core of authentication in JAAS is the LoginModule, which encapsulates the authentication implementation. We can have text-file, configuration, database, or LDAP-based login modules. ActiveMQ has classes for several LoginModules: http://svn.apache.org/viewvc/activemq/trunk/activemq-jaas/src/main/java/org/apache/activemq/jaas/
I think user-group mapping can be done in, for example, an LDAPLoginModule (a skeleton sketch follows below).
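
A minimal, hypothetical skeleton of such a LoginModule, assuming a flat users/groups file as the data source; the class name, the UserPrincipal type, and the way the user is resolved here are illustrative only and not part of any attached patch:

{code:java}
import java.security.Principal;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class FlatFileLoginModule implements LoginModule {

  // Simple principal carrying the user (or group) name.
  public static class UserPrincipal implements Principal {
    private final String name;
    public UserPrincipal(String name) { this.name = name; }
    public String getName() { return name; }
  }

  private Subject subject;
  private String user;

  public void initialize(Subject subject, CallbackHandler callbackHandler,
      Map<String, ?> sharedState, Map<String, ?> options) {
    this.subject = subject;
    // options could carry the path of the users/groups file, e.g. options.get("file").
  }

  public boolean login() throws LoginException {
    // A real module would look the caller up in the flat users/groups file
    // (or in LDAP); here we simply take the local OS user for illustration.
    user = System.getProperty("user.name");
    if (user == null) {
      throw new LoginException("no user available");
    }
    return true;
  }

  public boolean commit() throws LoginException {
    // On success, populate the Subject with the user (and group) principals.
    subject.getPrincipals().add(new UserPrincipal(user));
    return true;
  }

  public boolean abort() throws LoginException { return true; }
  public boolean logout() throws LoginException { return true; }
}
{code}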

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637765#action_12637765 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

I see. We could do something similar here for our user list. Using hadoop-site instead of a separate conf file means one less file to manage. We don't have semantics like '*' or blank; a client is either a valid user or not.
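
For illustration only, such a user list could look like the following in hadoop-site.xml; the property name and values are hypothetical, since nothing had been agreed on at this point:

{code:xml}
<!-- Hypothetical example: flat list of users and groups for this cluster. -->
<property>
  <name>hadoop.cluster.users</name>
  <value>alice,bob,analytics-group</value>
  <description>Flat list of user and group names permitted to use this
  cluster's services; no wildcard or empty-value semantics.</description>
</property>
{code}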

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Vinod K V (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637925#action_12637925 ] 

Vinod K V commented on HADOOP-4348:
-----------------------------------

Do we also wish to identify the Hostnames/IP addresses for this kind of authorization?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Status: Patch Available  (was: Open)

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12644039#action_12644039 ] 

Enis Soztutar commented on HADOOP-4348:
---------------------------------------

@Sanjay
bq.   1. rather then invent new authorization layers use JAAS
bq.   2. we should have used JAAS in the first place when we did UGI.
Exactly.

@Arun, @Sanjay
Yes, I also share the view that enabling the security manager is overkill, and that the policy file's syntax is ugly.

I am attaching a patch for review which does the job without enabling the security manager. The security manager's implementation simply delegates all the work to AccessController; the patch uses AccessController directly, so the SecurityManager is never enabled. With this scheme, security checks occur only where the patch explicitly makes them. AFAIK, all of the core Java code consults the SecurityManager, and since it is not enabled, performance does not suffer. Of course, we should verify this before the patch is committed.

The patch introduces a HadoopPolicy implementation, which reads its configuration from an XML file. The schema of the file is relatively simple and extensible for further JAAS-based authorization checks. Unfortunately, the permissions/principals model does not map cleanly onto the property-based configuration files, so I guess we have to introduce a new format.
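
A very rough sketch of that approach, with all class names hypothetical (they are simplified stand-ins, not the contents of the jaas_service patches): install a custom Policy and call AccessController directly, so the SecurityManager is never enabled. Assumes Java 6, where java.security.Policy can be subclassed by overriding only implies().

{code:java}
import java.security.AccessController;
import java.security.BasicPermission;
import java.security.Permission;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.util.Set;

public class PolicySketch {
  // Permission representing "may use protocol X" (illustrative).
  public static class ConnectPermission extends BasicPermission {
    public ConnectPermission(String protocol) { super(protocol); }
  }

  // Simplified stand-in for a policy implementation: grants ConnectPermission
  // when a principal in the domain is on the allow-list, and stays out of the
  // way for every other permission (no SecurityManager is enabled).
  public static class AllowListPolicy extends Policy {
    private final Set<String> allowed;
    public AllowListPolicy(Set<String> allowed) { this.allowed = allowed; }

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {
      if (!(permission instanceof ConnectPermission)) {
        return true; // restrict only our own service-level checks
      }
      for (Principal p : domain.getPrincipals()) {
        if (allowed.contains(p.getName())) {
          return true;
        }
      }
      return false;
    }
  }

  // The explicit check; only code that calls this pays any cost.
  public static void checkAccess(String protocol) {
    AccessController.checkPermission(new ConnectPermission(protocol));
  }
}
{code}

To take effect, the policy would be installed once with Policy.setPolicy(...), and checkAccess() would be called from code running under Subject.doAs for the connection's authenticated Subject, so the Subject's principals are visible to the policy.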



> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Status: Open  (was: Patch Available)

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_7_20081210.patch

Fixed javadocs for a couple of classes...

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641984#action_12641984 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

Looking at the implementation I'm reminded of HADOOP-4049.

Hadoop's RPC is implemented as two layers: Server & Client implement a simple transport that sends a stream of serialized instances to a server, where they're processed and then serialized instances are streamed back to the client.  RPC layers methods, parameters, etc. on top of this.  The layering isn't perfect, but it's still worth preserving.  If we wish to replace the transport or the RPC logic someday, then keeping the layers distinct should simplify things.

When a change requires changes to both layers, as this patch does, that raises a red flag, and makes me wonder if it might better be done at one level or the other, rather than spread across both.  HADOOP-4049 started out modifying both layers, but eventually wound up only modifying the RPC layer, and it became a simpler patch for it.

So I wonder if this might also be implemented by adding fields to Invocation, so that each call passes the protocol name and the invoking ugi.  Then the Invoker can check these.  Wouldn't that be simpler & contain the implementation to a single layer?
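
For illustration, a self-contained sketch of that idea; the class and field names below are hypothetical stand-ins, not the actual org.apache.hadoop.ipc.RPC.Invocation:

{code:java}
import java.security.AccessControlException;
import java.util.Set;

// Carry the protocol name and the caller's identity with each call so the
// server-side Invoker can authorize before reflectively dispatching.
class InvocationSketch {
  String protocolName;   // e.g. "org.apache.hadoop.hdfs.protocol.ClientProtocol"
  String callerUser;     // user name taken from the client's UGI
  String methodName;
  Object[] parameters;
}

class InvokerSketch {
  private final Set<String> allowedUsers; // the flat user/group list, already resolved

  InvokerSketch(Set<String> allowedUsers) {
    this.allowedUsers = allowedUsers;
  }

  // Called before the method is dispatched; rejects callers not on the list.
  void authorize(InvocationSketch call) throws AccessControlException {
    if (!allowedUsers.contains(call.callerUser)) {
      throw new AccessControlException("User " + call.callerUser
          + " is not authorized to call " + call.protocolName);
    }
  }
}
{code}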




> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Nigel Daley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649427#action_12649427 ] 

Nigel Daley commented on HADOOP-4348:
-------------------------------------

Arun, will this feature have a design doc and test plan?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638149#action_12638149 ] 

Allen Wittenauer commented on HADOOP-4348:
------------------------------------------

We have two grids.  We only want data to flow one way.  If a user has an account on both grids, we would need to put IP-level protections in place in order to enforce the data flow policy (since only user-level protection would let the data flow both ways). The question comes down to whether Hadoop should do this or should there be a firewall rule in place.

In my mind, this is better handled by Hadoop, since there is a very good chance that you may only want certain users to have these restrictions.  For example, normal users have to abide by the policy, but an automated task that copies data could be free to go any direction.

Additionally, in a homogeneous set of nodes, it may be that the hadoop-site.xml file is the same for everything but the namenode, which can be easily templated by a configuration management system.  I'm highly concerned about putting something like ACLs in the hadoop-site.xml file.  It would greatly increase the amount of work an ops team running many grids would have to do.  If a GUI is put on top of ACLs (as mentioned by the scheduling team), then I *definitely* do not want it mucking with hadoop-site.xml.  Corruption of that file would likely mean my entire grid goes down vs. users just losing access.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Issue Comment Edited: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Allen Wittenauer (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638149#action_12638149 ] 

aw edited comment on HADOOP-4348 at 10/8/08 5:16 PM:
-------------------------------------------------------------------

We have two grids.  We only want data to flow one way.  If a user has an account on both grids, we would need to put IP-level protections in place in order to enforce the data flow policy (since only user-level protection would let the data flow both ways). The question comes down to whether Hadoop should do this or should there be a firewall rule in place.

In my mind, this is better handled by Hadoop, since there is a very good chance that you may only want certain users to have these restrictions.  For example, normal users have to abide by the policy, but an automated task that copies data could be free to go any direction.

Additionally, in a homogeneous set of nodes, it may be that the hadoop-site.xml file is the same for everything but the namenode configuration option, such that it can be easily templated by a configuration management system.  I'm highly concerned about putting something like ACLs in the hadoop-site.xml file.  It would greatly increase the amount of work an ops team running many grids would have to do.  If a GUI is put on top of ACLs (as mentioned by the scheduling team), then I *definitely* do not want it mucking with hadoop-site.xml.  Corruption of that file would likely mean my entire grid goes down vs. users just losing access.

      was (Author: aw):
    We have two grids.  We only want data to flow one way.  If a user has an account on both grids, we would need to put IP-level protections in place in order to enforce the data flow policy (since only user-level protection would let the data flow both ways). The question comes down to whether Hadoop should do this or should there be a firewall rule in place.

In my mind, this is better handled by Hadoop, since there is a very good chance that you may only want certain users to have these restrictions.  For example, normal users have to abide by the policy, but an automated task that copies data could be free to go any direction.

Additionally, in a homogeneous set of nodes, it may be that the hadoop-site.xml file is the same for everything but the namenode, which can be easily templated by a configuration management system.  I'm highly concerned that putting something like ACLs in the hadoop-site.xml file.  It would greatly increase the amount of work an ops team of many grids would have to do.  If there is a GUI put on top of ACLs (as mentioned by the scheduling team), then I *definitely* do not want it mucking with hadoop-site.xml.  Corruption of that file would likely mean my entire grid goes down vs. users just losing access.
  
> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Status: Open  (was: Patch Available)

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12645128#action_12645128 ] 

Hadoop QA commented on HADOOP-4348:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12393307/jaas_service_v3.patch
  against trunk revision 711350.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 38 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    -1 findbugs.  The patch appears to introduce 1 new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    -1 core tests.  The patch failed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3529/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3529/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3529/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3529/console

This message is automatically generated.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642230#action_12642230 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

Doug says:
Sanjay> The best way to represent that service access is when a service proxy object is created - e.g when the connection is established.
>A proxy is not bound to a single connection. Connections are retrieved from a cache each time a call is made. Different proxies may share the same connection, and a single proxy may use different connections for different calls.

Good point, I missed that.
But a proxy object still represents a session that can potentially need authentication and service-level authorization.

Sanjay> I see your argument to be equivalent to arguing against service level authorization and that method level authorization is sufficient.
Doug> No, but we will eventually probably need method-level authorization too, and it would be nice if whatever support we add now also helps then. If we do this in RPC, then we can examine only the protocol name for now, and subsequently add method-level authorization at the same place. So implementing service-level-authentication this way better prepares us for method-level authentication.

We already have method-level authorization in HDFS (i.e. permission checking). Doing it inside the rpc method invocation would not work, since it is very specific to the method in question - one has to check the permissions along the path. Thus, in general, method-level authorization is best done inside the *implementation* of each of the methods and not in the rpc layer.

I completely agree with you that the session layer (if we choose to create one) should be done in *one of the two layers* (ipc or rpc) but *not both*.
In my earlier comment I pointed out a way to fix Arun's patch to do it at one layer.
But I do agree that it may be worth creating a session layer explicitly to support this rather than just putting the code in one of the two layers.

Service-level authorization can be done in one of three places:
# In the method implementation (forces every method to remember to do it).
# In the rpc invocation layer (as you suggest).
# In a session layer below

But if this JIRA is indeed about service-level semantics rather than method-level semantics, we should strongly consider creating the session layer, since authentication will also need it; doing authentication per RPC is not incorrect, but it is very expensive.
Authentication often requires a challenge-response exchange, and there may also be encryption negotiation. While this can be done per RPC, it is best done at the session level. I will let Kan give us some details on this. I am merely arguing for some sort of session-level context: authentication is one use case for it, and service authorization is another.

Now here is where the clean abstractions get messy:
I suspect that if we want to take advantage of the Java security APIs such as GSS (Kan, please correct me here, as I am only casually familiar with the security APIs), then we may want to have one session per connection (not necessary, but simpler). This would break your original vision of sharing an ipc instance across many upper-layer instances. I think we could rework some of the interfaces to allow an upper layer to say that it wants a dedicated ipc-layer instance. We can explore that further if we are in agreement on the rest.





> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_6_20081209.patch

Updated patch with a few more test cases...

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12647674#action_12647674 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

bq. 1. Using JAAS subject can make it much easier for kerberos based authentication. However UGI is heavily used in HDFS. I guess in the short term we should use both and gradually switch to subject. Instead of passing UGI in connection header, and generating subject from UGI, we may pass subject in header and generate UGI from it.

I propose we switch to passing Subject in the connection header in lieu of UGI via HADOOP-4656. 

Also, we'll keep UGI as an internal detail so that we do not need to change HDFS for now... maybe at some point we should completely do away with it and change all of HDFS to use Subject too.



> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12647428#action_12647428 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

Here is a proposal to take this forward so as to use JAAS more extensively for authorization and authentication in Hadoop:

# Change the IPC Server (specifically Connection) to use java.security.Subject rather than UGI - we do not want the IPC layer to be aware of two user-identity systems; we need to pick one, and that is JAAS (the standard in the Java world)
# Do away with the thread-local UGI and use the thread-local Subject via JAAS.
# Use Subject.doAs and pass in the stored Subject (per connection object) for both checking service-level authorization (as Enis has done in his patch) and for the RPC Handler (see the sketch below).
# To keep changes relatively local to the ipc/rpc layer, I propose we make UGI a non-public interface (via appropriate javadoc, since hdfs and map-reduce need it to be a 'public class') and add a convenience method which converts the 'current Subject' to a UGI.
# Future enhancement: HADOOP-4656
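
A minimal sketch of item 3, assuming the Server keeps the authenticated Subject on the Connection object; authorize and invokeRpc are hypothetical placeholders for the service-level check and the actual protocol call, not real Hadoop methods:

{code}
import java.security.AccessController;
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;

public class HandlerSketch {
  /**
   * Run the per-call work as the caller's Subject, so downstream code can
   * recover it via Subject.getSubject(AccessController.getContext()) instead
   * of a thread-local UGI.
   */
  public Object handle(Subject caller, final Object rpcCall) throws Exception {
    return Subject.doAs(caller, new PrivilegedExceptionAction<Object>() {
      public Object run() throws Exception {
        Subject current = Subject.getSubject(AccessController.getContext());
        authorize(current, rpcCall);   // service-level authorization check
        return invokeRpc(rpcCall);     // dispatch to the protocol method
      }
    });
  }

  private void authorize(Subject s, Object call) { /* check the ACLs here */ }

  private Object invokeRpc(Object call) { return null; }
}
{code}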

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642224#action_12642224 ] 

Enis Soztutar commented on HADOOP-4348:
---------------------------------------

Sorry for coming late, but I am not in favor of implementing a custom authorization solution. I have been looking into JAAS lately and it seems a perfect fit for our needs. I guess incremental updates to JAAS-based authorization and authentication can solve this issue, HADOOP-4490, HADOOP-4487 and all of the sub-issues.
For authorization we can plug in the current Unix-based implementation for now, but once implemented, HADOOP-1741 will fit the framework seamlessly.
And for access control, we can implement permission classes for any type of operation. For example, we can have a ConnectionPermission to check IPC connections, and an RPCProtocolPermission for RPC protocol name permissions (a rough sketch follows at the end of this comment).
With JAAS we can implement user-based access control easily by running the code with the user's privileges; we can check that users do not kill each other's jobs, etc.

We can achieve authentication/authorization independence. We can just continue to work on access control with unixugi, and introduce kerberos-based authentication without changing any of the access control code at all.
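
A minimal sketch of what a ConnectionPermission might look like if modeled as a standard java.security permission; the class name and the helper method are illustrative, not existing Hadoop code:

{code}
import java.security.AccessController;
import java.security.BasicPermission;

/**
 * Permission guarding IPC connections to a named protocol. BasicPermission
 * already supplies implies/equals/hashCode/getActions, so only a constructor
 * is needed.
 */
public class ConnectionPermission extends BasicPermission {
  public ConnectionPermission(String protocolName) {
    super(protocolName);
  }

  /**
   * Example check: if the connection-handling code runs inside Subject.doAs
   * for the connecting user, a JAAS policy grant for that user's principal
   * decides whether this call throws an AccessControlException.
   */
  public static void checkConnectionAllowed(String protocolName) {
    AccessController.checkPermission(new ConnectionPermission(protocolName));
  }
}
{code}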


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Nigel Daley (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12652131#action_12652131 ] 

Nigel Daley commented on HADOOP-4348:
-------------------------------------

Questions after reading the design doc:
 - what happens when hadoop-policy.xml syntax is bad?
 - what happens when the hadoop-policy.xml file is missing but auth is enabled?
 - what happens when the hadoop-policy.xml file is world/group writable?
 - what new command-line utilities will be provided?

Additional test cases that come to mind after reading the (thin) design doc:
 - various invalid hadoop-policy.xml syntax
 - various boundary values for user/group/resource in hadoop-policy.xml
 - admin cmd line utility test cases
 - performance when auth turned off and on
 - following combinations: (auth on, auth off) x (client, server)

All for now.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Tsz Wo (Nicholas), SZE (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642323#action_12642323 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-4348:
------------------------------------------------

> rather then invent new authorization layers use JAAS
+1 on using JAAS

It seems that only IPC/RPC traffic is discussed.  How about other channels, like Servlets and plain socket connections?  These channels should be considered in the design.  The corresponding implementations could be done in separate issues.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12655655#action_12655655 ] 

Hudson commented on HADOOP-4348:
--------------------------------

Integrated in Hadoop-trunk #685 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/685/])
    . Add service-level authorization for Hadoop.


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637361#action_12637361 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

Yes, that's correct.

I'm not too sure about your comment on interfaces. Can you elaborate a bit on that? A concrete example?

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12639267#action_12639267 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

Okay Allen, we can only grant you one wish and naturally we picked the easy one. :) We'll use a separate conf file. For IP restriction, you'll have to file a new JIRA.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Status: Patch Available  (was: Open)

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_5_20081206.patch

Updated patch; per Sanjay's feedback, I changed ConnectionHeader so that it is no longer a 'Configurable' class...

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12654765#action_12654765 ] 

Hadoop QA commented on HADOOP-4348:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12395517/HADOOP-4348_5_20081206.patch
  against trunk revision 724578.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 28 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3693/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3693/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3693/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3693/console

This message is automatically generated.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_7_20081210.patch

Updated patch to incorporate the last few bits of Sanjay's feedback... I'm opening separate jiras for some of his comments as indicated there.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12640943#action_12640943 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

Here is a proposal for adding authorization to Hadoop:
 
 # The actual authorization mechanism is pluggable; by default it is config-based. In the future we might add LDAP-based authorization. The proposal is to add a new AuthorizationManager, along with a factory, which has two methods (a rough sketch follows below):
 #* {{void authorize(Class<?> protocol, UserGroupInformation ugi) throws AuthorizationException}}
 #* {{void reload()}} - Reload the config on the admin's command via a command-line option.
 # The default config-based authorization system has a new configuration file which lists ACLs per protocol; for example, this allows the admin to specify which user the DataNode should be running as, and ditto for the TaskTracker. This configuration file is similar to the one specified by HADOOP-3698.
 # The authorization check should be made as early as possible, i.e. when the Client connects to the Server over RPC. This ensures that the server isn't affected by misbehaving/malicious clients, e.g. DoS attacks.
 # Changes to the RPC system for authorization checking:
 #* Change the RPC Client to restrict it to only one protocol per connection; this ensures that we only need to do the authorization check once - on connection setup. This implies that Client.ConnectionId must include the protocol, and the client has to advertise the protocol on connection setup, in the header, along with the UGI. The Server uses the protocol and the UGI in the header for authorization.
 #* If the authorization fails, the Server immediately disconnects the Client's connection.
 #* Add a new AuthorizedProtocol interface which has one method: {{AuthorizationManager getAuthorizationManager()}} which returns the authorization system's implementation used by the RPC system on connection setup.
 # We assume that the AuthorizationManager implementation is fast enough to perform the authorization inline on connection setup from the client, i.e. when the Server parses the {{ConnectionHeader}}.

Thoughts?
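
A rough sketch of the proposed AuthorizationManager, with a trivial config-based implementation; to keep the sketch self-contained it takes the user name and group names directly instead of a UserGroupInformation, and the class and exception names are illustrative, not the actual patch:

{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Thrown when a connecting user is not allowed to use a protocol. */
class AuthorizationException extends IOException {
  public AuthorizationException(String msg) { super(msg); }
}

/** The two methods from the proposal. */
interface AuthorizationManager {
  void authorize(Class<?> protocol, String user, String[] groups)
      throws AuthorizationException;
  void reload() throws IOException;
}

/**
 * Config-based implementation: an in-memory map from protocol name to the
 * users and groups allowed to connect. How the map is filled in (conf file
 * format, admin refresh command) is left open here; real code would also
 * keep users and groups in separate sets.
 */
class ConfiguredAuthorizationManager implements AuthorizationManager {
  private volatile Map<String, Set<String>> acls =
      new HashMap<String, Set<String>>();

  public void authorize(Class<?> protocol, String user, String[] groups)
      throws AuthorizationException {
    Set<String> allowed = acls.get(protocol.getName());
    if (allowed == null) {
      throw new AuthorizationException("No ACL configured for " + protocol.getName());
    }
    if (allowed.contains(user)) {
      return;
    }
    for (String group : groups) {
      if (allowed.contains(group)) {
        return;
      }
    }
    throw new AuthorizationException(user + " is not authorized for " + protocol.getName());
  }

  public void reload() throws IOException {
    // Re-read the ACL conf file and atomically swap in the new map.
  }
}
{code}

Keeping reload() as a separate method matches the idea of refreshing the config on the admin's command, without restarting the service.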


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Sanjay Radia (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641994#action_12641994 ] 

Sanjay Radia commented on HADOOP-4348:
--------------------------------------

Doug, do you mean the authorization will be checked on each method invocation?

The problem is that the service authorization should be checked at the ipc layer rather than the rpc layer. This patch stores a pointer to the authorization manager in the rpc layer (at rpc end-point creation time); it would be better to let the ipc layer store this pointer. Then everything would happen in the ipc layer, which is the correct place to do it.


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12652064#action_12652064 ] 

Enis Soztutar commented on HADOOP-4348:
---------------------------------------

Here is my take on the patch :
# I could not understand the difference between AuthorizationEx and AccessControlEx. Doesn't "access control" imply "authorization"?
# Why do we set the policy in use in all the places (TT, DN, JT, etc.) instead of setting it once in SecurityUtil?
# Instead of passing UGI in the connection header, we should pass Subject (and create the UGI from the Subject, not the other way around). The authentication can be enforced during the creation of the Subject, which should be done at the client side. Then for kerberos-based authentication, we can check the credentials of the Subject.
# I am not a big fan of introducing a custom hadoop-policy file with a new syntax (in my version of the patch); however, I think it is more general, extensible and simpler than this version.
  - The permission / principal mapping is really a many-to-many relation, so it does not naturally fit into property name/value style configuration.
  - Having Service/ServiceAuthorizationManager/PolicyProvider/HDFSPolicyProvider/MapReducePolicyProvider classes just for property name/value style configuration seems to be overkill.
  - For introducing a new type of permission (not a service-level authorization), we would have to introduce many classes in this design. Moreover, ConfiguredPolicy does not capture future authorization points (such as job level, file access, ...).
  - It is not clear how we can configure other types of (future) permissions with this scheme. For example, if we wanted to introduce permissions for each method in a protocol, we could not easily configure them.

minor issues : 
# hadoop.security.authorization should default to false in hadoop-default.xml.
# The System.err output in SecurityUtil.setPolicy seems redundant.
# For setting the ACL list from conf, we can prefix group names with ":", similar to unix style. For example, <name> foo, bar, :hadoop </name> would give permissions to users foo and bar and to group hadoop (a small parsing sketch follows this list).
# Some LOG.info calls in ipc.Server can be changed to LOG.debug, with guards. In case of DoS attacks, a lot of log output will be produced if the server runs at info level.
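
A small sketch of minor issue 3, parsing an ACL value where a leading ':' marks a group; the class name and syntax details are just the suggestion above, not something already in the patch:

{code}
import java.util.HashSet;
import java.util.Set;

/**
 * Parses a value like "foo, bar, :hadoop" into users {foo, bar} and
 * groups {hadoop}, and answers whether a given user is permitted.
 */
public class AclValue {
  private final Set<String> users = new HashSet<String>();
  private final Set<String> groups = new HashSet<String>();

  public AclValue(String confValue) {
    for (String token : confValue.split(",")) {
      String t = token.trim();
      if (t.length() == 0) {
        continue;
      }
      if (t.startsWith(":")) {
        groups.add(t.substring(1).trim());
      } else {
        users.add(t);
      }
    }
  }

  public boolean permits(String user, String[] userGroups) {
    if (users.contains(user)) {
      return true;
    }
    for (String g : userGroups) {
      if (groups.contains(g)) {
        return true;
      }
    }
    return false;
  }
}
{code}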


> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

I just committed this.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: security
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, HADOOP-4348_1_20081201.patch, HADOOP-4348_2_20081202.patch, HADOOP-4348_3_20081204.patch, HADOOP-4348_4_20081205.patch, HADOOP-4348_5_20081206.patch, HADOOP-4348_6_20081209.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, HADOOP-4348_7_20081210.patch, jaas_service_v1.patch, jaas_service_v2.patch, jaas_service_v3.patch, ServiceLevelAuthorization.pdf, ServiceLevelAuthorization.pdf
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642330#action_12642330 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

> But if we have to implement an SSL-like layer ourselves, it might be simpler to implement it as a session layered on a binary message-exchange API than as a non-blocking socket.

Yes, you could be right. But the socket interface is well understood and fits into the existing code nicely.

> Non-blocking i/o is an awkward coding paradigm, since you have to assemble messages (including headers) incrementally and maintain control state explicitly.

Yes, we do that explicitly in Server.Connection. With encryption, we'll have another layer of headers and buffers to manage. If we can encapsulate the encryption layer into a socket, we don't have to clog the Server code further.

> Adding service-level authorization to Hadoop
> --------------------------------------------
>
>                 Key: HADOOP-4348
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4348
>             Project: Hadoop Core
>          Issue Type: New Feature
>            Reporter: Kan Zhang
>            Assignee: Arun C Murthy
>             Fix For: 0.20.0
>
>         Attachments: HADOOP-4348_0_20081022.patch, jaas_service_v1.patch
>
>
> Service-level authorization is the initial checking done by a Hadoop service to find out if a connecting client is a pre-defined user of that service. If not, the connection or service request will be declined. This feature allows services to limit access to a clearly defined group of users. For example, service-level authorization allows "world-readable" files on a HDFS cluster to be readable only by the pre-defined users of that cluster, not by anyone who can connect to the cluster. It also allows a M/R cluster to define its group of users so that only those users can submit jobs to it.
> Here is an initial list of requirements I came up with.
>     1. Users of a cluster is defined by a flat list of usernames and groups. A client is a user of the cluster if and only if her username is listed in the flat list or one of her groups is explicitly listed in the flat list. Nested groups are not supported.
>     2. The flat list is stored in a conf file and pushed to every cluster node so that services can access them.
>     3. Services will monitor the modification of the conf file periodically (5 mins interval by default) and reload the list if needed.
>     4. Checking against the flat list is done as early as possible and before any other authorization checking. Both HDFS and M/R clusters will implement this feature.
>     5. This feature can be switched off and is off by default.
> I'm aware of interests in pulling user data from LDAP. For this JIRA, I suggest we implement it using a conf file. Additional data sources may be supported via new JIRA's.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HADOOP-4348:
----------------------------------

    Status: Patch Available  (was: Open)



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637341#action_12637341 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

> Having service-level authorization may save some resources by avoiding elaborate downstream checking.

Just to clarify, I meant that resources may be saved in cases where the service is dealing with non-users; I don't mean replacing downstream checking with service-level authorization.



[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_3_20081204.patch

Updated patch.



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12652287#action_12652287 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

Thanks for the review Enis, responses inline:

bq. I could not understand the difference between AuthorizationEx and AccessControlEx. Doesn't "access control" imply "authorization" ?
AuthorizationException is used to hide the stack trace from the user, a security feature.

bq. Why do we set the policy in use in all the places (TT, DN, JT, etc) instead of setting it once in SecurityUtil?
The default is set in SecurityUtil; it is then set in the various daemons with the appropriate PolicyProvider. We need different PolicyProviders since there are non-public protocols which also need to be covered, e.g. JobSubmissionProtocol.
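
As a rough illustration of that arrangement (class names, method names and ACL keys below are hypothetical stand-ins, not necessarily what is in the attached patch), each daemon could supply a provider that maps an ACL configuration key to the protocol interface it guards:

{code}
// Illustrative sketch only; all names here are hypothetical.
import java.util.Arrays;
import java.util.List;

interface JobSubmissionProtocol {}   // stand-in for the real protocol interface
interface InterTrackerProtocol {}    // stand-in for the real protocol interface

/** A single (ACL config key, protocol interface) pair. */
class Service {
  private final String aclKey;
  private final Class<?> protocol;
  Service(String aclKey, Class<?> protocol) {
    this.aclKey = aclKey;
    this.protocol = protocol;
  }
  String getAclKey() { return aclKey; }
  Class<?> getProtocol() { return protocol; }
}

/** Each daemon (NameNode, JobTracker, ...) supplies its own provider. */
abstract class PolicyProvider {
  abstract List<Service> getServices();
}

/** Hypothetical provider for a Map/Reduce cluster. */
class MapReducePolicyProvider extends PolicyProvider {
  List<Service> getServices() {
    return Arrays.asList(
        new Service("security.job.submission.protocol.acl", JobSubmissionProtocol.class),
        new Service("security.inter.tracker.protocol.acl", InterTrackerProtocol.class));
  }
}
{code}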

bq. instead of passing UGI in connection header, we should pass Subject(and create the UGI from Subject, not the other way). The authentication can be enforced during the creation of Subject which should be done at the client side. Then for kerberos based authentication, we can check the credentials of the Subject.
I agree; however, I propose we do that separately, once we change all of the NN to use User/Group principals - for now this keeps the change smaller.

bq. I am not a big fan of introducing a custom hadoop-policy file with a new syntax (in my version of the patch), however I think it is more general, extensible and simpler than this version.
This patch is just using the ACLs already in place for Queues introduced via HADOOP-3698, thus not introducing any new syntax.

I'll fix all the other minor nits (logging, default values) in the next patch. Thanks!



[jira] Issue Comment Edited: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653346#action_12653346 ] 

enis edited comment on HADOOP-4348 at 12/4/08 8:38 AM:
----------------------------------------------------------------

I have a few questions regarding the patch :

# Is there any advantage to doing the authorization check in RPC rather than in ipc.Server? If the former is better, then shouldn't ipc.Server#authorize() be an abstract method rather than one with an empty body? 

# I understand that the current policy configuration is in line with the queue configuration. However, I am not comfortable with the Service and PolicyProvider classes only holding key name -> protocol name mappings. I propose we 
## introduce a Configuration#keys() method returning the key names in the configuration.
## configure the acls like :
{code} 
<property>
  <name>security.connectionPermission.org.apache.hadoop.mapred.JobSubmissionProtocol</name>
  <value>*</value>
  <description></description>
</property>
{code} 
## search for keys starting with the prefix "security.connectionPermission" and load them (see the sketch below). Admittedly, all of the configuration keys will be scanned, which is on the order of 100s, but I guess that will be acceptable. Moreover, we can use a configuration w/o default resources (see below).

# In ConfiguredPolicy#refresh(), the conf.resources ArrayList gets appended to on every reload request, which is a leak. I guess we should create a new Configuration w/o loading default resources and add hadoop-policy.xml to it. 
Alternatively, we can get rid of hadoop-policy.xml, configure everything in hadoop-site.xml (core-site.xml), and use Configuration#reloadConfiguration() in ConfiguredPolicy#refresh(). 
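
A minimal sketch of the two suggestions above - rebuild the policy from a freshly loaded configuration on each refresh, and pick out the ACL entries by key prefix - with plain java.util types standing in for Hadoop's Configuration (all names hypothetical):

{code}
// Illustrative sketch; java.util.Map stands in for Hadoop's Configuration.
import java.util.HashMap;
import java.util.Map;

class PolicyRefresher {
  private static final String PREFIX = "security.connectionPermission.";

  /**
   * Build the protocol -> ACL map from scratch on every refresh, instead of
   * appending the same resource to an already-loaded configuration.
   */
  Map<String, String> refresh(Map<String, String> freshlyLoadedConf) {
    Map<String, String> protocolToAcl = new HashMap<String, String>();
    for (Map.Entry<String, String> e : freshlyLoadedConf.entrySet()) {
      if (e.getKey().startsWith(PREFIX)) {
        String protocolName = e.getKey().substring(PREFIX.length());
        protocolToAcl.put(protocolName, e.getValue());
      }
    }
    return protocolToAcl;
  }
}
{code}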

Minor things : 
# mradmin and refresh-auth-policy seems good. 
# the LOG.info() in ConfiguredPolicy#implies() seems redundant. 



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641660#action_12641660 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

This sounds like a reasonable design to me.  We might call it "ServiceAuthorizationManager" or somesuch, since its scope is limited to permitting or denying access to an entire service, rather than more fine-grained method- or data-based authorization (e.g., permission to kill jobs launched by folks in the same group).
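
For illustration only, the kind of check such a manager would make could look roughly like this (the ACL representation and all names are assumptions, not the eventual implementation):

{code}
// Illustrative sketch of a service-level check: permit or deny access to a
// whole protocol, with no method- or data-level granularity.
import java.util.Collections;
import java.util.Map;
import java.util.Set;

class ServiceAuthorizationManager {
  /** protocol name -> users/groups allowed to connect to it ("*" = everyone) */
  private final Map<String, Set<String>> acls;

  ServiceAuthorizationManager(Map<String, Set<String>> acls) {
    this.acls = acls;
  }

  /** Throws if the connecting user may not talk to this protocol at all. */
  void authorize(String user, Set<String> groups, String protocol) {
    Set<String> allowed = acls.get(protocol);
    if (allowed != null
        && (allowed.contains("*") || allowed.contains(user)
            || !Collections.disjoint(allowed, groups))) {
      return;  // the client is a pre-defined user of this service
    }
    throw new SecurityException(user + " is not authorized for " + protocol);
  }
}
{code}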



[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: ServiceLevelAuthorization.pdf

Design doc and test plan attached.



[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Status: Patch Available  (was: Open)

Trying hudson...



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12652289#action_12652289 ] 

Arun C Murthy commented on HADOOP-4348:
---------------------------------------

Nigel - responses inline:

bq. what happens when hadoop-policy.xml syntax is bad?
An illegal schema is handled by Configuration itself, and the ACL parser handles incorrect users/groups.

bq. what happens when the hadoop-policy.xml file is missing but auth is enabled?
Configuration will fail if it cannot 'load' the resource.

bq. what happens when the hadoop-policy.xml file is world/group writable?
Hmm... it's assumed that the cluster configs are handled correctly by the admin.

bq. what new command-line utilities will be provided?
I'll update the spec - the proposal is to add a '-refresh-auth-policy' switch to dfsadmin, and a similar one to a new command 'mradmin'.
$ bin/hadoop dfsadmin -refresh-auth-policy
$ bin/hadoop mradmin -refresh-auth-policy

I'll update the test plan too.



[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: ServiceLevelAuthorization.pdf

Updated document incorporating Nigel's feedback. Thanks Nigel!



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642048#action_12642048 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

Sanjay> The best way to represent that service access is when a service proxy object is created - e.g when the connection is established.

A proxy is not bound to a single connection.  Connections are retrieved from a cache each time a call is made. Different proxies may share the same connection, and a single proxy may use different connections for different calls.

Sanjay> We could share multiple service sessions in a single connection but that complexity is not worth it.

It would be simpler to implement this way, not more complex.  In HADOOP-4049 it was considerably simpler to pass extra data by modifying the RPC code than Client/Server.  That's my primary motivation here: to keep the code simple.  So unless there's a reason why we must authorize per connection rather than per request, it would be easier to authorize requests and would better compartmentalize the code.  There are some performance implications.  Authorizing per request will use fewer connections but perform more authorizations.  I don't know whether this is significant.  I expect that ACLs will be cached, and that authorization will not be too expensive, but that remains to be seen.  So performance may provide a motivation to authorize per connection.  But let's not prematurely optimize.

Sanjay> I see your argument to be equivalent to arguing against service level authorization and that method level authorization is sufficient.

No, but we will probably eventually need method-level authorization too, and it would be nice if whatever support we add now also helps then.  If we do this in RPC, then we can examine only the protocol name for now, and subsequently add method-level authorization at the same place.  So implementing service-level authorization this way better prepares us for method-level authorization.

Sanjay> Would you be happier if we created an intermediate layer, say rpc-session, in between. I am not seriously suggesting we do that.

We have two layers today.  We could add this at either layer.  It would be cleaner to add it only at one layer, not mixed between the two, as in the current patch.  It would be simpler to add it to the RPC layer, and I have yet to hear a strong reason why that would be wrong.  That's all I'm saying.




[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637139#action_12637139 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

> Is the part about specifying users/groups and allowing access only to them similar to HADOOP-3698, which was fixed for Hadoop 0.19 ? 

Thanks for pointing it out. They are meant to complement each other. Service-level authorization is done at the outset, when a client connection is first set up. It only verifies that the client is a user of the cluster, nothing more. Once the connection is set up, e.g. in the case of RPC, this checking is bypassed and each subsequent RPC call is only subject to downstream authorization checking like HADOOP-3698. Having service-level authorization may save some resources by avoiding elaborate downstream checking. If it does not, a cluster should have the option to turn it off. This feature was originally proposed to ease firewall management for clusters.



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12637990#action_12637990 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

No. I think IP restrictions are best dealt with at the firewall layer. In fact, one of the motivations for this feature is to remove the need for firewalls between clusters co-located in the same security zone.



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642263#action_12642263 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

> Also, does Kerberos really require a separate TCP connection per client, or rather does it require separate handshakes and encryptions per client?  We could establish a separate session per client over a shared connection...

Yes, we could. Kerberos itself has no concept of transport. However, it does require separate handshakes and encryption per client.

> Or do we want to build on SSL-like standards (authenticated byte streams) and layer per-user message exchange on that?

This has the nice property of hiding authentication and encryption services from the upper layer. To the rpc layer it's the same as using a normal socket connection.



[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Enis Soztutar (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Enis Soztutar updated HADOOP-4348:
----------------------------------

    Attachment: jaas_service_v3.patch

Attaching an updated patch as a candidate for submission. Differences from the previous version include improved javadoc and test cases. 

Here are some points from the patch : 
- introduce HadoopPrincipal, UserPrincipal, GroupPrincipal encapsulating user and group names. 
- introduce HadoopPermission as a general permission superclass
- introduce IPCProtocolPermission to represent a permission for an ipc call of a specific protocol (interface); see the sketch below
- introduce ipc.ConnectionHeader, sent once with every ipc connection
- ipc.Client and ipc.Server are changed for the new header format. ipc.Server now calls authorize() for every connection if enabled. 
- proxy methods in ipc.RPC are deprecated in favor of versions that send the UGI
- hadoop daemons (DN, TT, JT, etc.) now log in as normal users and send the UGI for ipc. 
- conf option "mapred.acl.enabled" is deprecated in favor of "hadoop.authorization.enabled". Later we will change queue acls to jaas-based auth.
- introduced HadoopPolicy as a Policy impl. Reads XML based conf from hadoop-policy.xml 
- introduced SecurityUtil for general purpose security checks.
- added a getSubject() method to UserGroupInformation. 
- TestPermission is renamed to TestFSPermissions


Reviews are more than welcome. 
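
As a sketch of the shape such a permission could take (illustrative only; the class in the attached patch may differ), an IPCProtocolPermission can be a plain java.security.Permission whose name is the protocol interface name:

{code}
// Illustrative sketch; the real IPCProtocolPermission in the patch may differ.
import java.security.Permission;

class IPCProtocolPermission extends Permission {
  IPCProtocolPermission(String protocolName) {
    super(protocolName);   // the permission name is the protocol interface name
  }
  public boolean implies(Permission p) {
    return (p instanceof IPCProtocolPermission) && getName().equals(p.getName());
  }
  public boolean equals(Object obj) {
    return (obj instanceof IPCProtocolPermission)
        && getName().equals(((IPCProtocolPermission) obj).getName());
  }
  public int hashCode() { return getName().hashCode(); }
  public String getActions() { return ""; }   // connection-level only, no actions
}
{code}

The server side could then run the check under the client's Subject (e.g. via Subject.doAsPrivileged plus AccessController.checkPermission) against whatever Policy implementation is installed; that wiring is omitted here.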



[jira] Updated: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-4348:
----------------------------------

    Attachment: HADOOP-4348_4_20081205.patch

Updated patch with test cases, passes all local tests on my machine.

Thanks for the review Nicholas! I've incorporated most of your comments in this patch.

Wrt the refresh command, I'm not sure it makes sense to require super-user privs. To my mind, any operation performed *on* the file-system (quotas, upgrade, safemode, etc.) requires super-user privs; however, changing ACLs on services is more of an administrative job... also, there really isn't a notion of a super-user for the map-reduce cluster. Maybe we could keep it as is for now and change it in the future?






[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642276#action_12642276 ] 

Doug Cutting commented on HADOOP-4348:
--------------------------------------

> To the rpc layer it's the same as using a normal socket connection.

But SSL may not be useful to us.  Does Java's SSLServerSocket implement non-blocking i/o?  If it doesn't, then we may gain little by using SSL, since we'd have to re-implement it.  It might be easier to encrypt and decrypt individual messages, implement handshakes, etc. on top of a binary message exchange protocol than to intermix that logic with the non-blocking client and server message exchange implementation.



[jira] Commented: (HADOOP-4348) Adding service-level authorization to Hadoop

Posted by "Kan Zhang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12642218#action_12642218 ] 

Kan Zhang commented on HADOOP-4348:
-----------------------------------

> It would be simpler to add it to the RPC layer, and I have yet to hear a strong reason why that would be wrong.

There is nothing wrong about it, but to have effective authorization we can't rely on the UGIs sent by the clients, because clients may lie. They may authenticate as one user at the beginning of a connection and subsequently use a different UGI when making calls. So at the server side, you either authenticate the user for each call (which may not always be possible, since many authentication protocols require more than one message exchange to authenticate a user), or you authenticate the user at the beginning of a connection and associate an authenticated user ID with the connection, such that all subsequent authorization checking on calls uses that authenticated user ID. The latter implies one user per connection.  
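
A tiny sketch of the latter approach (hypothetical types, not the actual ipc.Server code): the authenticated identity is fixed when the connection is set up, and every later call is authorized as that identity, whatever the client claims afterwards.

{code}
// Illustrative sketch; not the actual ipc.Server connection handling.
class Connection {
  private String authenticatedUser;   // bound once, at connection setup

  /** Called once, after the connection header has been read and verified. */
  void setAuthenticatedUser(String user) {
    if (authenticatedUser != null) {
      throw new IllegalStateException("a user is already bound to this connection");
    }
    authenticatedUser = user;
  }

  /** Every subsequent call on this connection is authorized as this user. */
  String getUserForCall() {
    if (authenticatedUser == null) {
      throw new IllegalStateException("connection not yet authenticated");
    }
    return authenticatedUser;
  }
}
{code}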
