Posted to common-user@hadoop.apache.org by Tony Dean <To...@sas.com> on 2012/06/09 00:11:53 UTC

hbase client security (cluster is secure)

Hi all,

I have created a hadoop/hbase/zookeeper cluster that is secured and verified.  Now a simple test is to connect an hbase client (e.g., the shell) to see its behavior.

Well, I get the following message on the hbase master: AccessControlException: authentication is required.

Looking at the code, it appears that the client passed the "simple" authentication byte in the RPC header.  Why, I don't know.

My client configuration is as follows:

hbase-site.xml:
   <property>
      <name>hbase.security.authentication</name>
      <value>kerberos</value>
   </property>

   <property>
      <name>hbase.rpc.engine</name>
      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
   </property>

hbase-env.sh:
export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"

hbase.jaas:
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=false
   useTicketCache=true
 };

I issue kinit for the client principal I want to use, then invoke the hbase shell.  I simply issue "list" and see the error on the server.
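
For completeness, the exact sequence is along these lines (the principal and realm are placeholders for the ones I actually use):

   $ kinit someuser@EXAMPLE.COM
   $ hbase shell
   hbase(main):001:0> list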

Any ideas what I am doing wrong?

Thanks so much!


_____________________________________________
From: Tony Dean
Sent: Tuesday, June 05, 2012 5:41 PM
To: common-user@hadoop.apache.org
Subject: hadoop file permission 1.0.3 (security)


Can someone detail the options that are available to set file permissions at the Hadoop and OS levels?  Here's what I have discovered thus far:

dfs.permissions  = true|false (works as advertised)
dfs.supergroup = supergroup (works as advertised)
dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask); it appears to set the permissions for files created in the Hadoop FS (minus the execute permission).
Why was dfs.umask deprecated?  What's the difference between the two?
dfs.datanode.data.dir.perm = perm (not sure this is working at all?); I thought it was supposed to set permissions on blocks at the OS level.

Are there any other file permission configuration properties?

What I would really like to do is set data block file permissions at the OS level so that the blocks are locked down from all users except the superuser and supergroup, but still allow them to be accessed through the Hadoop API as governed by HDFS permissions.  Is this possible?
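
To make the question concrete, here is a sketch of how I understand these properties would fit together in hdfs-site.xml; the 700 value for dfs.datanode.data.dir.perm is an assumption on my part (I believe the default is 755), not something I have verified:

hdfs-site.xml:
   <property>
      <name>dfs.permissions</name>
      <value>true</value>
      <!-- enable HDFS permission checking -->
   </property>
   <property>
      <name>dfs.umaskmode</name>
      <value>022</value>
      <!-- umask applied to files created through the Hadoop FS API -->
   </property>
   <property>
      <name>dfs.datanode.data.dir.perm</name>
      <value>700</value>
      <!-- assumed: OS-level permissions on the datanode block directories -->
   </property>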

Thanks.


Tony Dean
SAS Institute Inc.
Senior Software Developer
919-531-6704





RE: hbase client security (cluster is secure)

Posted by Tony Dean <To...@sas.com>.
Anyone have any direction for me on this matter?  It's probably something simple that I'm doing wrong, but I can't figure it out.  Thanks.

-----Original Message-----
From: Tony Dean 
Sent: Saturday, June 09, 2012 4:51 PM
To: 'Harsh J'; user@hbase.apache.org
Subject: RE: hbase client security (cluster is secure)

Hi Harsh,

Thanks for re-routing to HBase user-group. ;-)

I followed the same steps as you, at least I tried to.

My cluster appears to be working and I outlined my client configuration below.

BTW: I knew that the hbase master authenticated to zookeeper via the quorum ensemble in order to find where the hadoop dfs lives, but I didn't realize the region servers and hbase clients also needed to authenticate to zookeeper.  Can you explain?

Anyway, here are the traces that I collected.

HBase master:

12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: Will send token of size 50 from initSASLContext.
12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
12/06/09 16:40:48 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
12/06/09 16:40:49 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)


zookeeper:

12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Responding to client SASL token.
12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Size of client SASL token: 67
Krb5Context.unwrap: token=[60 41 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00 ff ff ff ff 65 66 6b 58 fd f1 6b ec 27 53 22 23 5d 7b 03 33 0b e3 2d 7f d3 a9 13 62 01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d 01 ]
Krb5Context.unwrap: data=[01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d ]
12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Successfully authenticated client: authenticationID=saspad@NA.SAS.COM;  authorizationID=saspad@NA.SAS.COM.
12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Setting authorizedID: saspad
12/06/09 16:40:47 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: saspad
12/06/09 16:40:50 DEBUG server.FinalRequestProcessor: Processing request:: sessionid:0x137d2f4f3350005 type:ping cxid:0xfffffffffffffffe zxid:0xffffffffffff

It looks like my client identity, "saspad", flowed across the wire successfully.


Thanks again for taking a look at this.






-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com]
Sent: Saturday, June 09, 2012 11:26 AM
To: user@hbase.apache.org
Cc: Tony Dean
Subject: Re: hbase client security (cluster is secure)

Hi again Tony,

Moving this to user@hbase.apache.org (bcc'd common-user@hadoop.apache.org). Please use the right user group lists for best responses. I've added you to CC in case you aren't subscribed to the HBase user lists.

Can you share the whole error/stacktrace-if-any/logs you get at the HMaster that says AccessControlException? Would be helpful to see what particular class/operation logged it to help you specifically.

I have an instance of 0.92-based cluster running after having followed http://hbase.apache.org/book.html#zookeeper and https://ccp.cloudera.com/display/CDH4DOC/HBase+Security+Configuration
and it seems to work well enough with auth enabled.

On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean <To...@sas.com> wrote:
> Hi all,
>
> I have created a hadoop/hbase/zookeeper cluster that is secured and verified.  Now a simple test is to connect an hbase client (e.g, shell) to see its behavior.
>
> Well, I get the following message on the hbase master: AccessControlException: authentication is required.
>
> Looking at the code it appears that the client passed "simple" authentication byte in the rpc header.  Why, I don't know?
>
> My client configuration is as follows:
>
> hbase-site.xml:
>   <property>
>      <name>hbase.security.authentication</name>
>      <value>kerberos</value>
>   </property>
>
>   <property>
>      <name>hbase.rpc.engine</name>
>      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
>   </property>
>
> hbase-env.sh:
> export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
>
> hbase.jaas:
> Client {
>   com.sun.security.auth.module.Krb5LoginModule required
>   useKeyTab=false
>   useTicketCache=true
>  };
>
> I issue kinit for the client I want to use.  Then invoke hbase shell.  I simply issue list and see the error on the server.
>
> Any ideas what I am doing wrong?
>
> Thanks so much!
>
>
> _____________________________________________
> From: Tony Dean
> Sent: Tuesday, June 05, 2012 5:41 PM
> To: common-user@hadoop.apache.org
> Subject: hadoop file permission 1.0.3 (security)
>
>
> Can someone detail the options that are available to set file permissions at the hadoop and os level?  Here's what I have discovered thus far:
>
> dfs.permissions  = true|false (works as advertised)
> dfs.supergroup = supergroup (works as advertised)
> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it appears to set the permissions for files created in hadoop fs (minus execute permission).
> why was dffs.umask deprecated?  what's difference between the 2.
> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I thought it was supposed to set permission on blks at the os level.
>
> Are there any other file permission configuration properties?
>
> What I would really like to do is set data blk file permissions at the os level so that the blocks can be locked down from all users except super and supergroup, but allow it to be used accessed by hadoop API as specified by hdfs permissions.  Is this possible?
>
> Thanks.
>
>
> Tony Dean
> SAS Institute Inc.
> Senior Software Developer
> 919-531-6704
>
>
>
>



--
Harsh J



Re: hbase client security (cluster is secure)

Posted by Andrew Purtell <ap...@apache.org>.
This is a bit of an X-Y discussion. The error here is not ZooKeeper-related in any way:

> 12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required

This says the Hadoop RPC client is configured for AuthMethod.SIMPLE,
but what the HBase master wants is AuthMethod.KERBEROS. See
http://hbase.apache.org/book/security.html and ensure the server-side
and client-side configurations conform before we proceed further.
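
For reference, the guide calls for at least these two properties in the
hbase-site.xml on the client's classpath (the shell reads the one under
HBASE_CONF_DIR). If the shell never actually picks this file up, the
client silently falls back to simple auth, which would match the
symptom here:

hbase-site.xml (client side):
   <property>
      <name>hbase.security.authentication</name>
      <value>kerberos</value>
   </property>
   <property>
      <name>hbase.rpc.engine</name>
      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
   </property>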

Separately,

> BTW: I knew that the hbase master authenticated to zookeeper via quorum ensemble in order to find where hadoop dfs lives, but I didn't realize the region servers and hbase clients also needed to authenticate to zookeeper.  Explain?

The master and regionservers authenticate to ZooKeeper to access
protected znodes that serve a variety of functions. If clients could
access them, such clients could subvert system functions, including
security. So we typically set up an "hbase" service principal, and only
that principal can access those protected znodes. Thus all HBase
daemons must authenticate with ZooKeeper using that principal.
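
For reference, the JAAS configuration the daemons use for their
ZooKeeper connection looks roughly like the following; the keytab path
and principal are placeholders, and the security guide linked above has
the authoritative version:

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   useTicketCache=false
   keyTab="/etc/hbase/conf/hbase.keytab"
   principal="hbase/fully.qualified.hostname@EXAMPLE.COM";
};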

Clients do not need to authenticate with ZooKeeper. The znodes which
clients must access are not sensitive and do not have restrictive
ACLs.  (Although clients wanting to take administrative action
currently must, see https://issues.apache.org/jira/browse/HBASE-6068.)


On Sat, Jun 9, 2012 at 1:51 PM, Tony Dean <To...@sas.com> wrote:
> Hi Harsh,
>
> Thanks for re-routing to HBase user-group. ;-)
>
> I followed the same steps as you, at least I tried to.
>
> My cluster appears to be working and I outlined my client configuration below.
>
> BTW: I knew that the hbase master authenticated to zookeeper via quorum ensemble in order to find where hadoop dfs lives, but I didn't realize the region servers and hbase clients also needed to authenticate to zookeeper.  Explain?
>
> Anyway, here are the traces that I collected.
>
> HBase master:
>
> 12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: Will send token of size 50 from initSASLContext.
> 12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
> 12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required
>        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:619)
> 12/06/09 16:40:48 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required
>        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:619)
> 12/06/09 16:40:49 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required
>        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
>        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:619)
>
>
> zookeeper:
>
> 12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Responding to client SASL token.
> 12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Size of client SASL token: 67
> Krb5Context.unwrap: token=[60 41 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00 ff ff ff ff 65 66 6b 58 fd f1 6b ec 27 53 22 23 5d 7b 03 33 0b e3 2d 7f d3 a9 13 62 01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d 01 ]
> Krb5Context.unwrap: data=[01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d ]
> 12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Successfully authenticated client: authenticationID=saspad@NA.SAS.COM;  authorizationID=saspad@NA.SAS.COM.
> 12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Setting authorizedID: saspad
> 12/06/09 16:40:47 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: saspad
> 12/06/09 16:40:50 DEBUG server.FinalRequestProcessor: Processing request:: sessionid:0x137d2f4f3350005 type:ping cxid:0xfffffffffffffffe zxid:0xffffffffffff
>
> It looks like my client identity, "saspad", flowed across the wire successfully.
>
>
> Thanks again for taking a look at this.
>
>
>
>
>
>
> -----Original Message-----
> From: Harsh J [mailto:harsh@cloudera.com]
> Sent: Saturday, June 09, 2012 11:26 AM
> To: user@hbase.apache.org
> Cc: Tony Dean
> Subject: Re: hbase client security (cluster is secure)
>
> Hi again Tony,
>
> Moving this to user@hbase.apache.org (bcc'd common-user@hadoop.apache.org). Please use the right user group lists for best responses. I've added you to CC in case you aren't subscribed to the HBase user lists.
>
> Can you share the whole error/stacktrace-if-any/logs you get at the HMaster that says AccessControlException? Would be helpful to see what particular class/operation logged it to help you specifically.
>
> I have an instance of 0.92-based cluster running after having followed http://hbase.apache.org/book.html#zookeeper and https://ccp.cloudera.com/display/CDH4DOC/HBase+Security+Configuration
> and it seems to work well enough with auth enabled.
>
> On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean <To...@sas.com> wrote:
>> Hi all,
>>
>> I have created a hadoop/hbase/zookeeper cluster that is secured and verified.  Now a simple test is to connect an hbase client (e.g, shell) to see its behavior.
>>
>> Well, I get the following message on the hbase master: AccessControlException: authentication is required.
>>
>> Looking at the code it appears that the client passed "simple" authentication byte in the rpc header.  Why, I don't know?
>>
>> My client configuration is as follows:
>>
>> hbase-site.xml:
>>   <property>
>>      <name>hbase.security.authentication</name>
>>      <value>kerberos</value>
>>   </property>
>>
>>   <property>
>>      <name>hbase.rpc.engine</name>
>>      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
>>   </property>
>>
>> hbase-env.sh:
>> export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
>>
>> hbase.jaas:
>> Client {
>>   com.sun.security.auth.module.Krb5LoginModule required
>>   useKeyTab=false
>>   useTicketCache=true
>>  };
>>
>> I issue kinit for the client I want to use.  Then invoke hbase shell.  I simply issue list and see the error on the server.
>>
>> Any ideas what I am doing wrong?
>>
>> Thanks so much!
>>
>>
>> _____________________________________________
>> From: Tony Dean
>> Sent: Tuesday, June 05, 2012 5:41 PM
>> To: common-user@hadoop.apache.org
>> Subject: hadoop file permission 1.0.3 (security)
>>
>>
>> Can someone detail the options that are available to set file permissions at the hadoop and os level?  Here's what I have discovered thus far:
>>
>> dfs.permissions  = true|false (works as advertised)
>> dfs.supergroup = supergroup (works as advertised)
>> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it appears to set the permissions for files created in hadoop fs (minus execute permission).
>> why was dffs.umask deprecated?  what's difference between the 2.
>> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I thought it was supposed to set permission on blks at the os level.
>>
>> Are there any other file permission configuration properties?
>>
>> What I would really like to do is set data blk file permissions at the os level so that the blocks can be locked down from all users except super and supergroup, but allow it to be used accessed by hadoop API as specified by hdfs permissions.  Is this possible?
>>
>> Thanks.
>>
>>
>> Tony Dean
>> SAS Institute Inc.
>> Senior Software Developer
>> 919-531-6704
>>
>>
>>
>>
>
>
>
> --
> Harsh J
>
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)

RE: hbase client security (cluster is secure)

Posted by Tony Dean <To...@sas.com>.
Hi Harsh,

Thanks for re-routing to HBase user-group. ;-)

I followed the same steps as you, at least I tried to.

My cluster appears to be working and I outlined my client configuration below.

BTW: I knew that the hbase master authenticated to zookeeper via the quorum ensemble in order to find where the hadoop dfs lives, but I didn't realize the region servers and hbase clients also needed to authenticate to zookeeper.  Can you explain?

Anyway, here are the traces that I collected.

HBase master:

12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: Will send token of size 50 from initSASLContext.
12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
12/06/09 16:40:48 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
12/06/09 16:40:49 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
org.apache.hadoop.security.AccessControlException: Authentication is required
        at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)


zookeeper:

12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Responding to client SASL token.
12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Size of client SASL token: 67
Krb5Context.unwrap: token=[60 41 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00 ff ff ff ff 65 66 6b 58 fd f1 6b ec 27 53 22 23 5d 7b 03 33 0b e3 2d 7f d3 a9 13 62 01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d 01 ]
Krb5Context.unwrap: data=[01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d ]
12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Successfully authenticated client: authenticationID=saspad@NA.SAS.COM;  authorizationID=saspad@NA.SAS.COM.
12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Setting authorizedID: saspad
12/06/09 16:40:47 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: saspad
12/06/09 16:40:50 DEBUG server.FinalRequestProcessor: Processing request:: sessionid:0x137d2f4f3350005 type:ping cxid:0xfffffffffffffffe zxid:0xffffffffffff

It looks like my client identity, "saspad", flowed across the wire successfully.


Thanks again for taking a look at this.






-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com] 
Sent: Saturday, June 09, 2012 11:26 AM
To: user@hbase.apache.org
Cc: Tony Dean
Subject: Re: hbase client security (cluster is secure)

Hi again Tony,

Moving this to user@hbase.apache.org (bcc'd common-user@hadoop.apache.org). Please use the right user group lists for best responses. I've added you to CC in case you aren't subscribed to the HBase user lists.

Can you share the whole error/stacktrace-if-any/logs you get at the HMaster that says AccessControlException? Would be helpful to see what particular class/operation logged it to help you specifically.

I have an instance of 0.92-based cluster running after having followed http://hbase.apache.org/book.html#zookeeper and https://ccp.cloudera.com/display/CDH4DOC/HBase+Security+Configuration
and it seems to work well enough with auth enabled.

On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean <To...@sas.com> wrote:
> Hi all,
>
> I have created a hadoop/hbase/zookeeper cluster that is secured and verified.  Now a simple test is to connect an hbase client (e.g, shell) to see its behavior.
>
> Well, I get the following message on the hbase master: AccessControlException: authentication is required.
>
> Looking at the code it appears that the client passed "simple" authentication byte in the rpc header.  Why, I don't know?
>
> My client configuration is as follows:
>
> hbase-site.xml:
>   <property>
>      <name>hbase.security.authentication</name>
>      <value>kerberos</value>
>   </property>
>
>   <property>
>      <name>hbase.rpc.engine</name>
>      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
>   </property>
>
> hbase-env.sh:
> export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
>
> hbase.jaas:
> Client {
>   com.sun.security.auth.module.Krb5LoginModule required
>   useKeyTab=false
>   useTicketCache=true
>  };
>
> I issue kinit for the client I want to use.  Then invoke hbase shell.  I simply issue list and see the error on the server.
>
> Any ideas what I am doing wrong?
>
> Thanks so much!
>
>
> _____________________________________________
> From: Tony Dean
> Sent: Tuesday, June 05, 2012 5:41 PM
> To: common-user@hadoop.apache.org
> Subject: hadoop file permission 1.0.3 (security)
>
>
> Can someone detail the options that are available to set file permissions at the hadoop and os level?  Here's what I have discovered thus far:
>
> dfs.permissions  = true|false (works as advertised)
> dfs.supergroup = supergroup (works as advertised)
> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it appears to set the permissions for files created in hadoop fs (minus execute permission).
> why was dffs.umask deprecated?  what's difference between the 2.
> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I thought it was supposed to set permission on blks at the os level.
>
> Are there any other file permission configuration properties?
>
> What I would really like to do is set data blk file permissions at the os level so that the blocks can be locked down from all users except super and supergroup, but allow it to be used accessed by hadoop API as specified by hdfs permissions.  Is this possible?
>
> Thanks.
>
>
> Tony Dean
> SAS Institute Inc.
> Senior Software Developer
> 919-531-6704
>
>
>
>



--
Harsh J



Re: hbase client security (cluster is secure)

Posted by Harsh J <ha...@cloudera.com>.
Hi again Tony,

Moving this to user@hbase.apache.org (bcc'd
common-user@hadoop.apache.org). Please use the right user group lists
for best responses. I've added you to CC in case you aren't subscribed
to the HBase user lists.

Can you share the whole error/stacktrace-if-any/logs you get at the
HMaster that says AccessControlException? It would be helpful to see
which particular class/operation logged it, to help you specifically.

I have an instance of a 0.92-based cluster running after having followed
http://hbase.apache.org/book.html#zookeeper and
https://ccp.cloudera.com/display/CDH4DOC/HBase+Security+Configuration
and it seems to work well enough with auth enabled.

On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean <To...@sas.com> wrote:
> Hi all,
>
> I have created a hadoop/hbase/zookeeper cluster that is secured and verified.  Now a simple test is to connect an hbase client (e.g, shell) to see its behavior.
>
> Well, I get the following message on the hbase master: AccessControlException: authentication is required.
>
> Looking at the code it appears that the client passed "simple" authentication byte in the rpc header.  Why, I don't know?
>
> My client configuration is as follows:
>
> hbase-site.xml:
>   <property>
>      <name>hbase.security.authentication</name>
>      <value>kerberos</value>
>   </property>
>
>   <property>
>      <name>hbase.rpc.engine</name>
>      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
>   </property>
>
> hbase-env.sh:
> export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
>
> hbase.jaas:
> Client {
>   com.sun.security.auth.module.Krb5LoginModule required
>   useKeyTab=false
>   useTicketCache=true
>  };
>
> I issue kinit for the client I want to use.  Then invoke hbase shell.  I simply issue list and see the error on the server.
>
> Any ideas what I am doing wrong?
>
> Thanks so much!
>
>
> _____________________________________________
> From: Tony Dean
> Sent: Tuesday, June 05, 2012 5:41 PM
> To: common-user@hadoop.apache.org
> Subject: hadoop file permission 1.0.3 (security)
>
>
> Can someone detail the options that are available to set file permissions at the hadoop and os level?  Here's what I have discovered thus far:
>
> dfs.permissions  = true|false (works as advertised)
> dfs.supergroup = supergroup (works as advertised)
> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it appears to set the permissions for files created in hadoop fs (minus execute permission).
> why was dffs.umask deprecated?  what's difference between the 2.
> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I thought it was supposed to set permission on blks at the os level.
>
> Are there any other file permission configuration properties?
>
> What I would really like to do is set data blk file permissions at the os level so that the blocks can be locked down from all users except super and supergroup, but allow it to be used accessed by hadoop API as specified by hdfs permissions.  Is this possible?
>
> Thanks.
>
>
> Tony Dean
> SAS Institute Inc.
> Senior Software Developer
> 919-531-6704
>
>
>
>



-- 
Harsh J
