Posted to yarn-issues@hadoop.apache.org by "Liu, David" <li...@gmail.com> on 2014/06/24 15:29:48 UTC

"SIMPLE authentication is not enabled" error for secured hdfs read

Hi experts,

After running kinit as the hadoop user, when I run this Java program on a secured hadoop cluster, I get the following error:
14/06/24 16:53:41 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
14/06/24 16:53:41 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
14/06/24 16:53:41 ERROR security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com":8020; 
Exception in thread "main" java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com":8020; 
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
	at org.apache.hadoop.ipc.Client.call(Client.java:1351)
	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
	at Testhdfs$1.run(Testhdfs.java:43)
	at Testhdfs$1.run(Testhdfs.java:30)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
	at Testhdfs.main(Testhdfs.java:30)


Here is my code:

UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
ugi.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        FSDataInputStream in = fs.open(new Path(uri));
        IOUtils.copy(in, System.out, 4096);
        return null;
    }
});

But when I run it without UserGroupInformation, like this, on the same cluster and with the same user, the code works fine:
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
FSDataInputStream in = fs.open(new Path(uri));
IOUtils.copy(in, System.out, 4096);

Could anyone help me?

Thanks

RE: Anyone know how to mock a secured hdfs for unit test?

Posted by "Zheng, Kai" <ka...@intel.com>.
Hi Chris,

Thanks for the great info. I will paste it into the JIRA for future reference, in case I or somebody else gets the chance to work on it.

Regards,
Kai

-----Original Message-----
From: Chris Nauroth [mailto:cnauroth@hortonworks.com] 
Sent: Saturday, June 28, 2014 4:27 AM
To: security@hadoop.apache.org
Cc: yarn-dev@hadoop.apache.org; hdfs-dev@hadoop.apache.org; hdfs-issues@hadoop.apache.org; yarn-issues@hadoop.apache.org; mapreduce-dev@hadoop.apache.org
Subject: Re: Anyone know how to mock a secured hdfs for unit test?

Hi David and Kai,

There are a couple of challenges with this, but I just figured out a pretty decent setup while working on HDFS-2856.  That code isn't committed yet, but if you open patch version 5 attached to that issue and look for the TestSaslDataTransfer class, then you'll see how it works.  Most of the logic for bootstrapping a MiniKDC and setting up the right HDFS configuration properties is in an abstract base class named SaslDataTransferTestCase.

I hope this helps.

There are a few other open issues out there related to tests in secure mode.  I know of HDFS-4312 and HDFS-5410.  It would be great to get more regular test coverage with something that more closely approximates a secured deployment.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Thu, Jun 26, 2014 at 7:27 AM, Zheng, Kai <ka...@intel.com> wrote:

> Hi David,
>
> Quite some time ago I opened HADOOP-9952 and planned to create secured 
> MiniClusters by making use of MiniKDC. Unfortunately since then I 
> didn't get the chance to work on it yet. If you need something like 
> that and would contribute, please let me know and see if anything I can help with. Thanks.
>
> Regards,
> Kai
>
> -----Original Message-----
> From: Liu, David [mailto:liujiong25@gmail.com]
> Sent: Thursday, June 26, 2014 10:12 PM
> To: hdfs-dev@hadoop.apache.org; hdfs-issues@hadoop.apache.org; 
> yarn-dev@hadoop.apache.org; yarn-issues@hadoop.apache.org; 
> mapreduce-dev@hadoop.apache.org; security@hadoop.apache.org
> Subject: Anyone know how to mock a secured hdfs for unit test?
>
> Hi all,
>
> I need to test my code which read data from secured hdfs, is there any 
> library to mock secured hdfs, can minihdfscluster do the work?
> Any suggestion is appreciated.
>
>
> Thanks
>

Re: Anyone know how to mock a secured hdfs for unit test?

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi David and Kai,

There are a couple of challenges with this, but I just figured out a pretty
decent setup while working on HDFS-2856.  That code isn't committed yet,
but if you open patch version 5 attached to that issue and look for the
TestSaslDataTransfer class, then you'll see how it works.  Most of the
logic for bootstrapping a MiniKDC and setting up the right HDFS
configuration properties is in an abstract base class named
SaslDataTransferTestCase.
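
For reference, a rough sketch of that kind of bootstrap, assuming the hadoop-minikdc test artifact is on the classpath. This is not the actual SaslDataTransferTestCase code; the principals and configuration keys below are illustrative and should be checked against the patch, which also wires up the new dfs.data.transfer.protection setting:

    import java.io.File;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.apache.hadoop.minikdc.MiniKdc;
    import org.apache.hadoop.security.UserGroupInformation;

    public class SecureMiniClusterSketch {
      public static void main(String[] args) throws Exception {
        // Start an embedded KDC in a scratch directory.
        File workDir = new File("target/kdc");
        MiniKdc kdc = new MiniKdc(MiniKdc.createConf(), workDir);
        kdc.start();

        // One keytab holding the service and test-user principals.
        File keytab = new File(workDir, "test.keytab");
        kdc.createPrincipal(keytab, "hdfs/localhost", "HTTP/localhost", "client");

        // Point a minicluster at the KDC via the usual security properties.
        Configuration conf = new HdfsConfiguration();
        conf.set("hadoop.security.authentication", "kerberos");
        conf.setBoolean("dfs.block.access.token.enable", true);
        conf.set("dfs.namenode.kerberos.principal", "hdfs/localhost@" + kdc.getRealm());
        conf.set("dfs.namenode.keytab.file", keytab.getAbsolutePath());
        conf.set("dfs.datanode.kerberos.principal", "hdfs/localhost@" + kdc.getRealm());
        conf.set("dfs.datanode.keytab.file", keytab.getAbsolutePath());
        conf.set("dfs.data.transfer.protection", "authentication");  // introduced by HDFS-2856
        UserGroupInformation.setConfiguration(conf);

        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        cluster.waitActive();
        // ... run tests against cluster.getFileSystem(), then cluster.shutdown() and kdc.stop().
      }
    }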

I hope this helps.

There are a few other open issues out there related to tests in secure
mode.  I know of HDFS-4312 and HDFS-5410.  It would be great to get more
regular test coverage with something that more closely approximates a
secured deployment.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Thu, Jun 26, 2014 at 7:27 AM, Zheng, Kai <ka...@intel.com> wrote:

> Hi David,
>
> Quite some time ago I opened HADOOP-9952 and planned to create secured
> MiniClusters by making use of MiniKDC. Unfortunately since then I didn't
> get the chance to work on it yet. If you need something like that and would
> contribute, please let me know and see if anything I can help with. Thanks.
>
> Regards,
> Kai
>
> -----Original Message-----
> From: Liu, David [mailto:liujiong25@gmail.com]
> Sent: Thursday, June 26, 2014 10:12 PM
> To: hdfs-dev@hadoop.apache.org; hdfs-issues@hadoop.apache.org;
> yarn-dev@hadoop.apache.org; yarn-issues@hadoop.apache.org;
> mapreduce-dev@hadoop.apache.org; security@hadoop.apache.org
> Subject: Anyone know how to mock a secured hdfs for unit test?
>
> Hi all,
>
> I need to test my code which read data from secured hdfs, is there any
> library to mock secured hdfs, can minihdfscluster do the work?
> Any suggestion is appreciated.
>
>
> Thanks
>

RE: Anyone know how to mock a secured hdfs for unit test?

Posted by "Zheng, Kai" <ka...@intel.com>.
Hi David,

Quite some time ago I opened HADOOP-9952 and planned to create secured MiniClusters by making use of MiniKDC. Unfortunately, I haven't had the chance to work on it yet. If you need something like that and would like to contribute, please let me know and we can see if there is anything I can help with. Thanks.

Regards,
Kai

-----Original Message-----
From: Liu, David [mailto:liujiong25@gmail.com] 
Sent: Thursday, June 26, 2014 10:12 PM
To: hdfs-dev@hadoop.apache.org; hdfs-issues@hadoop.apache.org; yarn-dev@hadoop.apache.org; yarn-issues@hadoop.apache.org; mapreduce-dev@hadoop.apache.org; security@hadoop.apache.org
Subject: Anyone know how to mock a secured hdfs for unit test?

Hi all,

I need to test my code which read data from secured hdfs, is there any library to mock secured hdfs, can minihdfscluster do the work?
Any suggestion is appreciated.


Thanks

Re: Anyone know how to mock a secured hdfs for unit test?

Posted by Steve Loughran <st...@hortonworks.com>.
I'd recommend creating a Linux VM with Kerberos installed; make that the
domain controller and work with it from your desktop.

This is the way to be strict, and to learn how Kerberos-managed clusters
can be used. Once you have lots of tests, a long-running VM is actually
faster than starting and stopping miniclusters per test suite, and it also
lets you discover the joys of SPNEGO-authed web browsing and the like.

see also
https://speakerdeck.com/stevel/secrets-of-yarn-application-development


On 26 June 2014 15:12, Liu, David <li...@gmail.com> wrote:

> Hi all,
>
> I need to test my code which read data from secured hdfs, is there any
> library to mock secured hdfs, can minihdfscluster do the work?
> Any suggestion is appreciated.
>
>
> Thanks

Anyone know how to mock a secured hdfs for unit test?

Posted by "Liu, David" <li...@gmail.com>.
Hi all,

I need to test my code, which reads data from secured hdfs. Is there any library to mock a secured hdfs? Can MiniDFSCluster do the work?
Any suggestion is appreciated.


Thanks

Anyone know which class JobHistoryServer uses to talk to a secured hdfs?

Posted by "Liu, David" <li...@gmail.com>.
Hi all,

I see that the JobHistoryServer has access to secured hdfs. Can anyone paste some code, or name the class it uses, to pass secure authentication?


Thanks


Re: How do I use java api to read data from secured hadoop cluster?

Posted by Vinayakumar B <vi...@apache.org>.
If kinit has been done on the machine, and as the user, where your client
program is running, then no other changes are required to your program.
BTW, your client configuration also needs to have the authentication
method configured.
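
For example, a minimal sketch assuming the client does not already pick up the secured core-site.xml (uri is the HDFS path from the earlier snippets):

    Configuration conf = new Configuration();
    // Tell the client the cluster uses Kerberos; normally this comes from core-site.xml.
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    FileSystem fs = FileSystem.get(URI.create(uri), conf);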

Regards,
Vinay


On Wed, Jun 25, 2014 at 5:20 AM, Liu, David <li...@gmail.com> wrote:

> By the way, I have kinit hadoop on the cluster, and "hadoop fs -ls /"
> works on the cluster. The code below is run on the cluster too.
> On Jun 25, 2014, at 6:22 AM, "Liu, David" <li...@gmail.com> wrote:
>
> > I mean to  read data from secured hdfs.
> >
> >
> > On Jun 25, 2014, at 6:14 AM, "Liu, David" <li...@gmail.com> wrote:
> >
> >> Hi experts,
> >>
> >> Can anyone provide some example or api name to read data from secured
> hadoop cluster?
> >> I have code like this which can read data from unsecured cluster, but
> when it comes to secured one, authentication error will show.
> >>> Configuration conf = new Configuration();
> >>> FileSystem fs = FileSystem.get(URI.create(uri), conf);
> >>> FSDataInputStream in = fs.open(new Path(uri));
> >>> IOUtils.copy(in, System.out, 4096);
> >>>
> >>> Could anyone help me? Really Appreicate it.
> >>>
> >>> Thanks
> >>
> >
>
>


-- 
Regards,
Vinay

Re: How do I use java api to read data from secured hadoop cluster?

Posted by "Liu, David" <li...@gmail.com>.
By the way, I have run kinit as the hadoop user on the cluster, and "hadoop fs -ls /" works on the cluster. The code below is run on the cluster too.
On Jun 25, 2014, at 6:22 AM, "Liu, David" <li...@gmail.com> wrote:

> I mean to  read data from secured hdfs.
> 
> 
> On Jun 25, 2014, at 6:14 AM, "Liu, David" <li...@gmail.com> wrote:
> 
>> Hi experts,
>> 
>> Can anyone provide some example or api name to read data from secured hadoop cluster?
>> I have code like this which can read data from unsecured cluster, but when it comes to secured one, authentication error will show.
>>> Configuration conf = new Configuration();
>>> FileSystem fs = FileSystem.get(URI.create(uri), conf);
>>> FSDataInputStream in = fs.open(new Path(uri));
>>> IOUtils.copy(in, System.out, 4096);
>>> 
>>> Could anyone help me? Really Appreicate it.
>>> 
>>> Thanks
>> 
> 


Re: How do I use java api to read data from secured hadoop cluster?

Posted by "Liu, David" <li...@gmail.com>.
I mean to read data from secured hdfs.


On Jun 25, 2014, at 6:14 AM, "Liu, David" <li...@gmail.com> wrote:

> Hi experts,
> 
> Can anyone provide some example or api name to read data from secured hadoop cluster?
> I have code like this which can read data from unsecured cluster, but when it comes to secured one, authentication error will show.
>> Configuration conf = new Configuration();
>> FileSystem fs = FileSystem.get(URI.create(uri), conf);
>> FSDataInputStream in = fs.open(new Path(uri));
>> IOUtils.copy(in, System.out, 4096);
>> 
>> Could anyone help me? Really Appreicate it.
>> 
>> Thanks
> 


How do I use java api to read data from secured hadoop cluster?

Posted by "Liu, David" <li...@gmail.com>.
Hi experts,

Can anyone provide an example, or the name of the API, for reading data from a secured hadoop cluster?
I have code like this which can read data from an unsecured cluster, but when it comes to a secured one, an authentication error shows up.
> Configuration conf = new Configuration();
> FileSystem fs = FileSystem.get(URI.create(uri), conf);
> FSDataInputStream in = fs.open(new Path(uri));
> IOUtils.copy(in, System.out, 4096);
> 
> Could anyone help me? Really appreciate it.
> 
> Thanks


Re: RE: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Vinayakumar B <vi...@apache.org>.
Sorry for my previous reply suggesting the use of
UserGroupInformation#createRemoteUser(String user, AuthMethod authMethod).
As Chris said, it will not add actual credentials to the UGI.

Hi David,
Now, if I understand you correctly, you want to access the secure cluster
using ugi.doAs(..), right? You have done:
    1) kinit for the hadoop user.
    2) Tried to create the UGI on the client side by calling
       UserGroupInformation#createRemoteUser().

First of all, if you want to access data as the same user you ran kinit for,
then you don't need any UGI at all. The FileSystem will internally be created
with the current user's UGI, authenticated via the Kerberos ticket cache.

Still, if you want to obtain the UGI object yourself, you can get it by
calling UserGroupInformation#getCurrentUser() at the beginning of your
program. It will be authenticated via the Kerberos ticket cache, provided
kinit was done for the same user.

Remember to call UserGroupInformation.setConfiguration(), before
getCurrentUser(), if you set any custom configuration programmatically.
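
A minimal sketch of that flow, reusing the classes from the earlier snippets (the uri value is a placeholder, and the conf.set call is only needed if the secure core-site.xml is not already on the client classpath):

    final String uri = "hdfs://namenode-host:8020/path/to/file";  // placeholder
    final Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);  // must run before getCurrentUser()

    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
        public Void run() throws Exception {
            FileSystem fs = FileSystem.get(URI.create(uri), conf);
            FSDataInputStream in = fs.open(new Path(uri));
            IOUtils.copy(in, System.out, 4096);
            return null;
        }
    });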

   Hi YeQi,
      As per David's update, the second snippet of the code is working fine,
so hadoop.security.authentication must already be configured in core-site.xml.
In that case there will not be any difference.

Regards,
Vinay


On Thu, Jun 26, 2014 at 12:54 PM, Yeqi <ye...@huawei.com> wrote:

> Hi Vinay/Chris/David
>
> I think this exception comes from client side didn't set "
> hadoop.security.authentication " to "kerberos" of the first snippet.
>
> In the second snippet, since the client user share the same OS user and
> env of the Hadoop superuser, so the access is passed.
>
> My suggestion is to use second snippet even in the different OS user, and
> add below line after conf obj is created.
> conf.set("hadoop.security.authentication", "kerberos")
>
> pls correct me if any idea.
> Thanks
> Ye Qi
>
>
> -----Original Message-----
> From: Chris Nauroth [mailto:cnauroth@hortonworks.com]
> Sent: June 26, 2014 2:38
> To: yarn-dev@hadoop.apache.org
> Subject: Re: "SIMPLE authentication is not enabled" error for secured hdfs read
>
> Just to clarify, the new method added in HADOOP-10683 helped fix erroneous
> logs that showed SIMPLE for the authentication method instead of KERBEROS.
>  Switching to that version of the method still won't automatically attach
> credentials to the ugi, so I expect you'll still get an authentication
> failure.
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Wed, Jun 25, 2014 at 3:14 AM, Vinayakumar B <vi...@apache.org>
> wrote:
>
> > Hi,
> >
> > In first snippet of the code following method can be used by providing
> > the corresponding AuthMethod.
> >
> > /**
> >    * Create a user from a login name. It is intended to be used for
> remote
> >    * users in RPC, since it won't have any credentials.
> >    * @param user the full user principal name, must not be empty or null
> >    * @return the UserGroupInformation for the remote user.
> >    */
> >   @InterfaceAudience.Public
> >   @InterfaceStability.Evolving
> >   public static UserGroupInformation createRemoteUser(String user,
> > AuthMethod authMethod) {
> >
> >
> > This has been added recently to trunk and branch-2. Its not yet
> > available in any release.
> >
> > This has been added as fix for HADOOP-10683
> >
> > Regards,
> > Vinay
> >
> >
> > On Wed, Jun 25, 2014 at 3:50 AM, Liu, David <li...@gmail.com>
> wrote:
> >
> > > Hi Nauroth,
> > >
> > > In this case, do you have any example on how to use java api to read
> > > data from secured hdfs?
> > >
> > > Thanks
> > >
> > >
> > >
> > >
> > > On Jun 25, 2014, at 2:24 AM, Chris Nauroth
> > > <cn...@hortonworks.com>
> > > wrote:
> > >
> > > > Hi David,
> > > >
> > > > UserGroupInformation.createRemoteUser does not attach credentials
> > > > to
> > the
> > > > returned ugi.  I expect the server side is rejecting the
> > > > connection due
> > > to
> > > > lack of credentials.  This is actually by design.  The
> > > > UserGroupInformation.createRemoteUser method is primarily intended
> > > > for
> > > use
> > > > on the server side when it wants to run a piece of its code while
> > > > impersonating the client.
> > > >
> > > > I'd say that your second code sample is the correct one.  After
> > > > running kinit to get credentials, you can just run your code.  I
> > > > expect
> > Kerberos
> > > > authentication to work without taking any special measures to call
> > > > UserGroupInformation directly from your code.
> > > >
> > > > Hope this helps.
> > > >
> > > > Chris Nauroth
> > > > Hortonworks
> > > > http://hortonworks.com/
> > > >
> > > >
> > > >
> > > > On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com>
> > > wrote:
> > > >
> > > >> Hi experts,
> > > >>
> > > >> After kinit hadoop, When I run this java file on a secured hadoop
> > > cluster,
> > > >> I met the following error:
> > > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > > >> cause:org.apache.hadoop.security.AccessControlException: Client
> > > >> cannot authenticate via:[TOKEN, KERBEROS]
> > > >> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while
> > > connecting
> > > >> to the server : org.apache.hadoop.security.AccessControlException:
> > > Client
> > > >> cannot authenticate via:[TOKEN, KERBEROS]
> > > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > > cause:java.io.IOException:
> > > >> org.apache.hadoop.security.AccessControlException: Client cannot
> > > >> authenticate via:[TOKEN, KERBEROS]
> > > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > > cause:java.io.IOException:
> > > >> Failed on local exception: java.io.IOException:
> > > >> org.apache.hadoop.security.AccessControlException: Client cannot
> > > >> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> > > >> "hdsh2-a161/10.62.66.161"; destination host is: "
> > hdsh2-a161.lss.emc.com
> > > >> ":8020;
> > > >> Exception in thread "main" java.io.IOException: Failed on local
> > > exception:
> > > >> java.io.IOException:
> > org.apache.hadoop.security.AccessControlException:
> > > >> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details :
> > > >> local
> > > host
> > > >> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> > > >> hdsh2-a161.lss.emc.com":8020;
> > > >>        at
> > > org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> > > >>        at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> > > >>        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngi
> > ne.java:206)
> > > >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> > > >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> > > >>        at
> > > >>
> > >
> > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.j
> > ava:57)
> > > >>        at
> > > >>
> > >
> > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccess
> > orImpl.java:43)
> > > >>        at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryIn
> > vocationHandler.java:186)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocati
> > onHandler.java:102)
> > > >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.g
> > etBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:
> > 1067)
> > > >>        at
> > > >>
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
> > > >>        at
> > > >>
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBloc
> > kLength(DFSInputStream.java:235)
> > > >>        at
> > > >>
> > org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202
> > )
> > > >>        at
> > > >>
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
> > > >>        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileS
> > ystem.java:290)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileS
> > ystem.java:286)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkReso
> > lver.java:81)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSyste
> > m.java:286)
> > > >>        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
> > > >>        at Testhdfs$1.run(Testhdfs.java:43)
> > > >>        at Testhdfs$1.run(Testhdfs.java:30)
> > > >>        at java.security.AccessController.doPrivileged(Native Method)
> > > >>        at javax.security.auth.Subject.doAs(Subject.java:415)
> > > >>        at
> > > >>
> > >
> > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformat
> > ion.java:1491)
> > > >>        at Testhdfs.main(Testhdfs.java:30)
> > > >>
> > > >>
> > > >> Here is my code:
> > > >>
> > > >> UserGroupInformation ugi =
> > > UserGroupInformation.createRemoteUser("hadoop");
> > > >>                ugi.doAs(new PrivilegedExceptionAction<Void>() {
> > > >>                        public Void run() throws Exception {
> > > >>                                Configuration conf = new
> > Configuration();
> > > >>                                FileSystem fs =
> > > >> FileSystem.get(URI.create(uri), conf);
> > > >>                                FSDataInputStream in = fs.open(new
> > > >> Path(uri));
> > > >>                                IOUtils.copy(in, System.out, 4096);
> > > >>                                return null;
> > > >>                        }
> > > >>                });
> > > >>
> > > >> But when I run it without UserGroupInformation, like this on the
> > > >> same cluster with the same user, the code works fine.
> > > >> Configuration conf = new Configuration();
> > > >>                                FileSystem fs =
> > > >> FileSystem.get(URI.create(uri), conf);
> > > >>                                FSDataInputStream in = fs.open(new
> > > >> Path(uri));
> > > >>                                IOUtils.copy(in, System.out,
> > > >> 4096);
> > > >>
> > > >> Could anyone help me?
> > > >>
> > > >> Thanks
> > > >
> > >
> > >
> >
> >
> > --
> > Regards,
> > Vinay
> >
>
>

RE: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Yeqi <ye...@huawei.com>.
Hi Vinay/Chris/David

I think this exception comes from the client side not setting "hadoop.security.authentication" to "kerberos" in the first snippet.

In the second snippet, since the client shares the same OS user and environment as the Hadoop superuser, the access passes.

My suggestion is to use the second snippet even as a different OS user, and to add the line below after the conf object is created:
conf.set("hadoop.security.authentication", "kerberos")

Please correct me if you have any other ideas.
Thanks
Ye Qi


-----Original Message-----
From: Chris Nauroth [mailto:cnauroth@hortonworks.com]
Sent: June 26, 2014 2:38
To: yarn-dev@hadoop.apache.org
Subject: Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Just to clarify, the new method added in HADOOP-10683 helped fix erroneous logs that showed SIMPLE for the authentication method instead of KERBEROS.
 Switching to that version of the method still won't automatically attach credentials to the ugi, so I expect you'll still get an authentication failure.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Wed, Jun 25, 2014 at 3:14 AM, Vinayakumar B <vi...@apache.org>
wrote:

> Hi,
>
> In first snippet of the code following method can be used by providing 
> the corresponding AuthMethod.
>
> /**
>    * Create a user from a login name. It is intended to be used for remote
>    * users in RPC, since it won't have any credentials.
>    * @param user the full user principal name, must not be empty or null
>    * @return the UserGroupInformation for the remote user.
>    */
>   @InterfaceAudience.Public
>   @InterfaceStability.Evolving
>   public static UserGroupInformation createRemoteUser(String user, 
> AuthMethod authMethod) {
>
>
> This has been added recently to trunk and branch-2. Its not yet 
> available in any release.
>
> This has been added as fix for HADOOP-10683
>
> Regards,
> Vinay
>
>
> On Wed, Jun 25, 2014 at 3:50 AM, Liu, David <li...@gmail.com> wrote:
>
> > Hi Nauroth,
> >
> > In this case, do you have any example on how to use java api to read 
> > data from secured hdfs?
> >
> > Thanks
> >
> >
> >
> >
> > On Jun 25, 2014, at 2:24 AM, Chris Nauroth 
> > <cn...@hortonworks.com>
> > wrote:
> >
> > > Hi David,
> > >
> > > UserGroupInformation.createRemoteUser does not attach credentials 
> > > to
> the
> > > returned ugi.  I expect the server side is rejecting the 
> > > connection due
> > to
> > > lack of credentials.  This is actually by design.  The 
> > > UserGroupInformation.createRemoteUser method is primarily intended 
> > > for
> > use
> > > on the server side when it wants to run a piece of its code while 
> > > impersonating the client.
> > >
> > > I'd say that your second code sample is the correct one.  After 
> > > running kinit to get credentials, you can just run your code.  I 
> > > expect
> Kerberos
> > > authentication to work without taking any special measures to call 
> > > UserGroupInformation directly from your code.
> > >
> > > Hope this helps.
> > >
> > > Chris Nauroth
> > > Hortonworks
> > > http://hortonworks.com/
> > >
> > >
> > >
> > > On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com>
> > wrote:
> > >
> > >> Hi experts,
> > >>
> > >> After kinit hadoop, When I run this java file on a secured hadoop
> > >> cluster, I met the following error:
> > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > >> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> > >> authenticate via:[TOKEN, KERBEROS]
> > >> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting
> > >> to the server : org.apache.hadoop.security.AccessControlException: Client
> > >> cannot authenticate via:[TOKEN, KERBEROS]
> > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > >> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> > >> org.apache.hadoop.security.AccessControlException: Client cannot
> > >> authenticate via:[TOKEN, KERBEROS]
> > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > >> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> > >> Failed on local exception: java.io.IOException:
> > >> org.apache.hadoop.security.AccessControlException: Client cannot
> > >> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> > >> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com":8020;
> > >> Exception in thread "main" java.io.IOException: Failed on local exception:
> > >> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> > >> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host
> > >> is: "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com":8020;
> > >>        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> > >>        at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> > >>        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > >>        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> > >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >>        at java.lang.reflect.Method.invoke(Method.java:606)
> > >>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > >>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> > >>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
> > >>        at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
> > >>        at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
> > >>        at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
> > >>        at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
> > >>        at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
> > >>        at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
> > >>        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
> > >>        at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
> > >>        at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
> > >>        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > >>        at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
> > >>        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
> > >>        at Testhdfs$1.run(Testhdfs.java:43)
> > >>        at Testhdfs$1.run(Testhdfs.java:30)
> > >>        at java.security.AccessController.doPrivileged(Native Method)
> > >>        at javax.security.auth.Subject.doAs(Subject.java:415)
> > >>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> > >>        at Testhdfs.main(Testhdfs.java:30)
> > >>
> > >>
> > >> Here is my code:
> > >>
> > >> UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
> > >>                ugi.doAs(new PrivilegedExceptionAction<Void>() {
> > >>                        public Void run() throws Exception {
> > >>                                Configuration conf = new Configuration();
> > >>                                FileSystem fs = FileSystem.get(URI.create(uri), conf);
> > >>                                FSDataInputStream in = fs.open(new Path(uri));
> > >>                                IOUtils.copy(in, System.out, 4096);
> > >>                                return null;
> > >>                        }
> > >>                });
> > >>
> > >> But when I run it without UserGroupInformation, like this on the 
> > >> same cluster with the same user, the code works fine.
> > >> Configuration conf = new Configuration();
> > >>                                FileSystem fs = 
> > >> FileSystem.get(URI.create(uri), conf);
> > >>                                FSDataInputStream in = fs.open(new 
> > >> Path(uri));
> > >>                                IOUtils.copy(in, System.out, 
> > >> 4096);
> > >>
> > >> Could anyone help me?
> > >>
> > >> Thanks
> > >
> >
> >
>
>
> --
> Regards,
> Vinay
>


Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Chris Nauroth <cn...@hortonworks.com>.
Just to clarify, the new method added in HADOOP-10683 helped fix erroneous
logs that showed SIMPLE for the authentication method instead of KERBEROS.
 Switching to that version of the method still won't automatically attach
credentials to the ugi, so I expect you'll still get an authentication
failure.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Wed, Jun 25, 2014 at 3:14 AM, Vinayakumar B <vi...@apache.org>
wrote:

> Hi,
>
> In the first snippet of the code, the following method can be used by
> providing the corresponding AuthMethod.
>
> /**
>    * Create a user from a login name. It is intended to be used for remote
>    * users in RPC, since it won't have any credentials.
>    * @param user the full user principal name, must not be empty or null
>    * @return the UserGroupInformation for the remote user.
>    */
>   @InterfaceAudience.Public
>   @InterfaceStability.Evolving
>   public static UserGroupInformation createRemoteUser(String user,
> AuthMethod authMethod) {
>
>
> This has been added recently to trunk and branch-2. It's not yet available
> in any release.
>
> This has been added as fix for HADOOP-10683
>
> Regards,
> Vinay
>
>
> On Wed, Jun 25, 2014 at 3:50 AM, Liu, David <li...@gmail.com> wrote:
>
> > Hi Nauroth,
> >
> > In this case, do you have any example on how to use java api to read data
> > from secured hdfs?
> >
> > Thanks
> >
> >
> >
> >
> > On Jun 25, 2014, at 2:24 AM, Chris Nauroth <cn...@hortonworks.com>
> > wrote:
> >
> > > Hi David,
> > >
> > > UserGroupInformation.createRemoteUser does not attach credentials to
> the
> > > returned ugi.  I expect the server side is rejecting the connection due
> > to
> > > lack of credentials.  This is actually by design.  The
> > > UserGroupInformation.createRemoteUser method is primarily intended for
> > use
> > > on the server side when it wants to run a piece of its code while
> > > impersonating the client.
> > >
> > > I'd say that your second code sample is the correct one.  After running
> > > kinit to get credentials, you can just run your code.  I expect
> Kerberos
> > > authentication to work without taking any special measures to call
> > > UserGroupInformation directly from your code.
> > >
> > > Hope this helps.
> > >
> > > Chris Nauroth
> > > Hortonworks
> > > http://hortonworks.com/
> > >
> > >
> > >
> > > On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com>
> > wrote:
> > >
> > >> Hi experts,
> > >>
> > >> After kinit hadoop, When I run this java file on a secured hadoop
> > cluster,
> > >> I met the following error:
> > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > >> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> > >> authenticate via:[TOKEN, KERBEROS]
> > >> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while
> > connecting
> > >> to the server : org.apache.hadoop.security.AccessControlException:
> > Client
> > >> cannot authenticate via:[TOKEN, KERBEROS]
> > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > cause:java.io.IOException:
> > >> org.apache.hadoop.security.AccessControlException: Client cannot
> > >> authenticate via:[TOKEN, KERBEROS]
> > >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> > >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> > cause:java.io.IOException:
> > >> Failed on local exception: java.io.IOException:
> > >> org.apache.hadoop.security.AccessControlException: Client cannot
> > >> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> > >> "hdsh2-a161/10.62.66.161"; destination host is: "
> hdsh2-a161.lss.emc.com
> > >> ":8020;
> > >> Exception in thread "main" java.io.IOException: Failed on local
> > exception:
> > >> java.io.IOException:
> org.apache.hadoop.security.AccessControlException:
> > >> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local
> > host
> > >> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> > >> hdsh2-a161.lss.emc.com":8020;
> > >>        at
> > org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> > >>        at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> > >>        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > >>        at
> > >>
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> > >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >>        at
> > >>
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >>        at
> > >>
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >>        at java.lang.reflect.Method.invoke(Method.java:606)
> > >>        at
> > >>
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > >>        at
> > >>
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> > >>        at
> > >>
> >
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
> > >>        at
> > >>
> >
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
> > >>        at
> > >> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
> > >>        at
> > >> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
> > >>        at
> > >>
> >
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
> > >>        at
> > >>
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
> > >>        at
> > >> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
> > >>        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
> > >>        at
> > >>
> >
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
> > >>        at
> > >>
> >
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
> > >>        at
> > >>
> >
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > >>        at
> > >>
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
> > >>        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
> > >>        at Testhdfs$1.run(Testhdfs.java:43)
> > >>        at Testhdfs$1.run(Testhdfs.java:30)
> > >>        at java.security.AccessController.doPrivileged(Native Method)
> > >>        at javax.security.auth.Subject.doAs(Subject.java:415)
> > >>        at
> > >>
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> > >>        at Testhdfs.main(Testhdfs.java:30)
> > >>
> > >>
> > >> Here is my code:
> > >>
> > >> UserGroupInformation ugi =
> > UserGroupInformation.createRemoteUser("hadoop");
> > >>                ugi.doAs(new PrivilegedExceptionAction<Void>() {
> > >>                        public Void run() throws Exception {
> > >>                                Configuration conf = new
> Configuration();
> > >>                                FileSystem fs =
> > >> FileSystem.get(URI.create(uri), conf);
> > >>                                FSDataInputStream in = fs.open(new
> > >> Path(uri));
> > >>                                IOUtils.copy(in, System.out, 4096);
> > >>                                return null;
> > >>                        }
> > >>                });
> > >>
> > >> But when I run it without UserGroupInformation, like this on the same
> > >> cluster with the same user, the code works fine.
> > >> Configuration conf = new Configuration();
> > >>                                FileSystem fs =
> > >> FileSystem.get(URI.create(uri), conf);
> > >>                                FSDataInputStream in = fs.open(new
> > >> Path(uri));
> > >>                                IOUtils.copy(in, System.out, 4096);
> > >>
> > >> Could anyone help me?
> > >>
> > >> Thanks
> > >
> >
> >
>
>
> --
> Regards,
> Vinay
>


Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Vinayakumar B <vi...@apache.org>.
Hi,

In the first snippet of the code, the following method can be used by providing
the corresponding AuthMethod.

/**
   * Create a user from a login name. It is intended to be used for remote
   * users in RPC, since it won't have any credentials.
   * @param user the full user principal name, must not be empty or null
   * @return the UserGroupInformation for the remote user.
   */
  @InterfaceAudience.Public
  @InterfaceStability.Evolving
  public static UserGroupInformation createRemoteUser(String user,
AuthMethod authMethod) {


This has been added recently to trunk and branch-2. It's not yet available
in any release.

This has been added as fix for HADOOP-10683
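
As an illustration only, calling that overload would presumably look like the
fragment below. The AuthMethod type is assumed here to be
org.apache.hadoop.security.SaslRpcServer.AuthMethod, and the fragment only
compiles against trunk/branch-2 builds that already contain HADOOP-10683. As
noted elsewhere in this thread, the overload only changes how the ugi is
labelled; it still attaches no Kerberos credentials.

// Hypothetical usage of the HADOOP-10683 overload (not in any release yet).
UserGroupInformation ugi =
    UserGroupInformation.createRemoteUser("hadoop",
        SaslRpcServer.AuthMethod.KERBEROS);
// Log messages would now show auth:KERBEROS instead of auth:SIMPLE, but the
// ugi still holds no ticket, so a secured NameNode will still reject it.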

Regards,
Vinay


On Wed, Jun 25, 2014 at 3:50 AM, Liu, David <li...@gmail.com> wrote:

> Hi Nauroth,
>
> In this case, do you have any example on how to use java api to read data
> from secured hdfs?
>
> Thanks
>
>
>
>
> On Jun 25, 2014, at 2:24 AM, Chris Nauroth <cn...@hortonworks.com>
> wrote:
>
> > Hi David,
> >
> > UserGroupInformation.createRemoteUser does not attach credentials to the
> > returned ugi.  I expect the server side is rejecting the connection due
> to
> > lack of credentials.  This is actually by design.  The
> > UserGroupInformation.createRemoteUser method is primarily intended for
> use
> > on the server side when it wants to run a piece of its code while
> > impersonating the client.
> >
> > I'd say that your second code sample is the correct one.  After running
> > kinit to get credentials, you can just run your code.  I expect Kerberos
> > authentication to work without taking any special measures to call
> > UserGroupInformation directly from your code.
> >
> > Hope this helps.
> >
> > Chris Nauroth
> > Hortonworks
> > http://hortonworks.com/
> >
> >
> >
> > On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com>
> wrote:
> >
> >> Hi experts,
> >>
> >> After kinit hadoop, When I run this java file on a secured hadoop
> cluster,
> >> I met the following error:
> >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> >> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> >> authenticate via:[TOKEN, KERBEROS]
> >> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while
> connecting
> >> to the server : org.apache.hadoop.security.AccessControlException:
> Client
> >> cannot authenticate via:[TOKEN, KERBEROS]
> >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:java.io.IOException:
> >> org.apache.hadoop.security.AccessControlException: Client cannot
> >> authenticate via:[TOKEN, KERBEROS]
> >> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> >> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:java.io.IOException:
> >> Failed on local exception: java.io.IOException:
> >> org.apache.hadoop.security.AccessControlException: Client cannot
> >> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> >> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com
> >> ":8020;
> >> Exception in thread "main" java.io.IOException: Failed on local
> exception:
> >> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> >> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local
> host
> >> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> >> hdsh2-a161.lss.emc.com":8020;
> >>        at
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> >>        at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> >>        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> >>        at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>        at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>        at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>        at java.lang.reflect.Method.invoke(Method.java:606)
> >>        at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> >>        at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
> >>        at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
> >>        at
> >>
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
> >>        at
> >> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
> >>        at
> >> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
> >>        at
> >>
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
> >>        at
> >> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
> >>        at
> >> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
> >>        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
> >>        at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
> >>        at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
> >>        at
> >>
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> >>        at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
> >>        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
> >>        at Testhdfs$1.run(Testhdfs.java:43)
> >>        at Testhdfs$1.run(Testhdfs.java:30)
> >>        at java.security.AccessController.doPrivileged(Native Method)
> >>        at javax.security.auth.Subject.doAs(Subject.java:415)
> >>        at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> >>        at Testhdfs.main(Testhdfs.java:30)
> >>
> >>
> >> Here is my code:
> >>
> >> UserGroupInformation ugi =
> UserGroupInformation.createRemoteUser("hadoop");
> >>                ugi.doAs(new PrivilegedExceptionAction<Void>() {
> >>                        public Void run() throws Exception {
> >>                                Configuration conf = new Configuration();
> >>                                FileSystem fs =
> >> FileSystem.get(URI.create(uri), conf);
> >>                                FSDataInputStream in = fs.open(new
> >> Path(uri));
> >>                                IOUtils.copy(in, System.out, 4096);
> >>                                return null;
> >>                        }
> >>                });
> >>
> >> But when I run it without UserGroupInformation, like this on the same
> >> cluster with the same user, the code works fine.
> >> Configuration conf = new Configuration();
> >>                                FileSystem fs =
> >> FileSystem.get(URI.create(uri), conf);
> >>                                FSDataInputStream in = fs.open(new
> >> Path(uri));
> >>                                IOUtils.copy(in, System.out, 4096);
> >>
> >> Could anyone help me?
> >>
> >> Thanks
> >
>
>


-- 
Regards,
Vinay

Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by "Liu, David" <li...@gmail.com>.
Hi Nauroth,

In this case, do you have any example on how to use java api to read data from secured hdfs?

Thanks




On Jun 25, 2014, at 2:24 AM, Chris Nauroth <cn...@hortonworks.com> wrote:

> Hi David,
> 
> UserGroupInformation.createRemoteUser does not attach credentials to the
> returned ugi.  I expect the server side is rejecting the connection due to
> lack of credentials.  This is actually by design.  The
> UserGroupInformation.createRemoteUser method is primarily intended for use
> on the server side when it wants to run a piece of its code while
> impersonating the client.
> 
> I'd say that your second code sample is the correct one.  After running
> kinit to get credentials, you can just run your code.  I expect Kerberos
> authentication to work without taking any special measures to call
> UserGroupInformation directly from your code.
> 
> Hope this helps.
> 
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
> 
> 
> 
> On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com> wrote:
> 
>> Hi experts,
>> 
>> After kinit hadoop, When I run this java file on a secured hadoop cluster,
>> I met the following error:
>> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:hdfs (auth:SIMPLE)
>> cause:org.apache.hadoop.security.AccessControlException: Client cannot
>> authenticate via:[TOKEN, KERBEROS]
>> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting
>> to the server : org.apache.hadoop.security.AccessControlException: Client
>> cannot authenticate via:[TOKEN, KERBEROS]
>> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
>> org.apache.hadoop.security.AccessControlException: Client cannot
>> authenticate via:[TOKEN, KERBEROS]
>> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
>> Failed on local exception: java.io.IOException:
>> org.apache.hadoop.security.AccessControlException: Client cannot
>> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
>> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com
>> ":8020;
>> Exception in thread "main" java.io.IOException: Failed on local exception:
>> java.io.IOException: org.apache.hadoop.security.AccessControlException:
>> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host
>> is: "hdsh2-a161/10.62.66.161"; destination host is: "
>> hdsh2-a161.lss.emc.com":8020;
>>        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>>        at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>>        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>>        at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>        at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>        at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>        at java.lang.reflect.Method.invoke(Method.java:606)
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>>        at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>>        at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
>>        at
>> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
>>        at
>> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
>>        at
>> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
>>        at
>> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
>>        at
>> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
>>        at
>> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
>>        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
>>        at
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>        at
>> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
>>        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
>>        at Testhdfs$1.run(Testhdfs.java:43)
>>        at Testhdfs$1.run(Testhdfs.java:30)
>>        at java.security.AccessController.doPrivileged(Native Method)
>>        at javax.security.auth.Subject.doAs(Subject.java:415)
>>        at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>>        at Testhdfs.main(Testhdfs.java:30)
>> 
>> 
>> Here is my code:
>> 
>> UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
>>                ugi.doAs(new PrivilegedExceptionAction<Void>() {
>>                        public Void run() throws Exception {
>>                                Configuration conf = new Configuration();
>>                                FileSystem fs =
>> FileSystem.get(URI.create(uri), conf);
>>                                FSDataInputStream in = fs.open(new
>> Path(uri));
>>                                IOUtils.copy(in, System.out, 4096);
>>                                return null;
>>                        }
>>                });
>> 
>> But when I run it without UserGroupInformation, like this on the same
>> cluster with the same user, the code works fine.
>> Configuration conf = new Configuration();
>>                                FileSystem fs =
>> FileSystem.get(URI.create(uri), conf);
>>                                FSDataInputStream in = fs.open(new
>> Path(uri));
>>                                IOUtils.copy(in, System.out, 4096);
>> 
>> Could anyone help me?
>> 
>> Thanks
> 


Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi David,

UserGroupInformation.createRemoteUser does not attach credentials to the
returned ugi.  I expect the server side is rejecting the connection due to
lack of credentials.  This is actually by design.  The
UserGroupInformation.createRemoteUser method is primarily intended for use
on the server side when it wants to run a piece of its code while
impersonating the client.

I'd say that your second code sample is the correct one.  After running
kinit to get credentials, you can just run your code.  I expect Kerberos
authentication to work without taking any special measures to call
UserGroupInformation directly from your code.

Hope this helps.

Chris Nauroth
Hortonworks
http://hortonworks.com/
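
A minimal, self-contained sketch of that second approach, assuming the
cluster's core-site.xml and hdfs-site.xml (with hadoop.security.authentication
set to kerberos) are on the classpath and that kinit has already been run. The
class name and argument handling are illustrative only, and Hadoop's own
IOUtils.copyBytes is used here in place of the commons-io copy call:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class SecureHdfsRead {
  public static void main(String[] args) throws Exception {
    // e.g. hdfs://<namenode-host>:8020/path/to/file
    String uri = args[0];
    // Loads core-site.xml etc. from the classpath, including the Kerberos settings.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    FSDataInputStream in = fs.open(new Path(uri));
    try {
      // Copy the file contents to stdout without closing System.out.
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      IOUtils.closeStream(in);
    }
  }
}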



On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com> wrote:

> Hi experts,
>
> After kinit hadoop, When I run this java file on a secured hadoop cluster,
> I met the following error:
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting
> to the server : org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> Failed on local exception: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com
> ":8020;
> Exception in thread "main" java.io.IOException: Failed on local exception:
> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host
> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> hdsh2-a161.lss.emc.com":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
>         at
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
>         at Testhdfs$1.run(Testhdfs.java:43)
>         at Testhdfs$1.run(Testhdfs.java:30)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at Testhdfs.main(Testhdfs.java:30)
>
>
> Here is my code:
>
> UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
>                 ugi.doAs(new PrivilegedExceptionAction<Void>() {
>                         public Void run() throws Exception {
>                                 Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>                                 return null;
>                         }
>                 });
>
> But when I run it without UserGroupInformation, like this on the same
> cluster with the same user, the code works fine.
> Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>
> Could anyone help me?
>
> Thanks


Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi David,

UserGroupInformation.createRemoteUser does not attach credentials to the
returned ugi.  I expect the server side is rejecting the connection due to
lack of credentials.  This is actually by design.  The
UserGroupInformation.createRemoteUser method is primarily intended for use
on the server side when it wants to run a piece of its code while
impersonating the client.

I'd say that your second code sample is the correct one.  After running
kinit to get credentials, you can just run your code.  I expect Kerberos
authentication to work without taking any special measures to call
UserGroupInformation directly from your code.

Hope this helps.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com> wrote:

> Hi experts,
>
> After kinit hadoop, When I run this java file on a secured hadoop cluster,
> I met the following error:
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting
> to the server : org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> Failed on local exception: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com
> ":8020;
> Exception in thread "main" java.io.IOException: Failed on local exception:
> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host
> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> hdsh2-a161.lss.emc.com":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
>         at
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
>         at Testhdfs$1.run(Testhdfs.java:43)
>         at Testhdfs$1.run(Testhdfs.java:30)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at Testhdfs.main(Testhdfs.java:30)
>
>
> Here is my code:
>
> UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
>                 ugi.doAs(new PrivilegedExceptionAction<Void>() {
>                         public Void run() throws Exception {
>                                 Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>                                 return null;
>                         }
>                 });
>
> But when I run it without UserGroupInformation, like this on the same
> cluster with the same user, the code works fine.
> Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>
> Could anyone help me?
>
> Thanks

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi David,

UserGroupInformation.createRemoteUser does not attach credentials to the
returned ugi.  I expect the server side is rejecting the connection due to
lack of credentials.  This is actually by design.  The
UserGroupInformation.createRemoteUser method is primarily intended for use
on the server side when it wants to run a piece of its code while
impersonating the client.

I'd say that your second code sample is the correct one.  After running
kinit to get credentials, you can just run your code.  I expect Kerberos
authentication to work without taking any special measures to call
UserGroupInformation directly from your code.

Hope this helps.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com> wrote:

> Hi experts,
>
> After kinit hadoop, When I run this java file on a secured hadoop cluster,
> I met the following error:
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting
> to the server : org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> Failed on local exception: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com
> ":8020;
> Exception in thread "main" java.io.IOException: Failed on local exception:
> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host
> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> hdsh2-a161.lss.emc.com":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
>         at
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
>         at Testhdfs$1.run(Testhdfs.java:43)
>         at Testhdfs$1.run(Testhdfs.java:30)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at Testhdfs.main(Testhdfs.java:30)
>
>
> Here is my code:
>
> UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
>                 ugi.doAs(new PrivilegedExceptionAction<Void>() {
>                         public Void run() throws Exception {
>                                 Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>                                 return null;
>                         }
>                 });
>
> But when I run it without UserGroupInformation, like this on the same
> cluster with the same user, the code works fine.
> Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>
> Could anyone help me?
>
> Thanks

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi David,

UserGroupInformation.createRemoteUser does not attach credentials to the
returned ugi.  I expect the server side is rejecting the connection due to
lack of credentials.  This is actually by design.  The
UserGroupInformation.createRemoteUser method is primarily intended for use
on the server side when it wants to run a piece of its code while
impersonating the client.

I'd say that your second code sample is the correct one.  After running
kinit to get credentials, you can just run your code.  I expect Kerberos
authentication to work without taking any special measures to call
UserGroupInformation directly from your code.

Hope this helps.

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 24, 2014 at 6:29 AM, Liu, David <li...@gmail.com> wrote:

> Hi experts,
>
> After kinit hadoop, When I run this java file on a secured hadoop cluster,
> I met the following error:
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE)
> cause:org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 WARN ipc.Client: Exception encountered while connecting
> to the server : org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 14/06/24 16:53:41 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException:
> Failed on local exception: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> "hdsh2-a161/10.62.66.161"; destination host is: "hdsh2-a161.lss.emc.com
> ":8020;
> Exception in thread "main" java.io.IOException: Failed on local exception:
> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host
> is: "hdsh2-a161/10.62.66.161"; destination host is: "
> hdsh2-a161.lss.emc.com":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1351)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:191)
>         at
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1067)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1057)
>         at
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1047)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:235)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:202)
>         at
> org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:195)
>         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1215)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:290)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:286)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:286)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:763)
>         at Testhdfs$1.run(Testhdfs.java:43)
>         at Testhdfs$1.run(Testhdfs.java:30)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at Testhdfs.main(Testhdfs.java:30)
>
>
> Here is my code:
>
> UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
>                 ugi.doAs(new PrivilegedExceptionAction<Void>() {
>                         public Void run() throws Exception {
>                                 Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>                                 return null;
>                         }
>                 });
>
> But when I run it without UserGroupInformation, like this on the same
> cluster with the same user, the code works fine.
> Configuration conf = new Configuration();
>                                 FileSystem fs =
> FileSystem.get(URI.create(uri), conf);
>                                 FSDataInputStream in = fs.open(new
> Path(uri));
>                                 IOUtils.copy(in, System.out, 4096);
>
> Could anyone help me?
>
> Thanks

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

How do I use java api to read data from secured hadoop cluster?

Posted by "Liu, David" <li...@gmail.com>.
Hi experts,

Can anyone provide an example, or the name of the right API, for reading data
from a secured Hadoop cluster?
I have code like this which can read data from an unsecured cluster, but when
it comes to a secured one, an authentication error shows up.
> Configuration conf = new Configuration();
> FileSystem fs = FileSystem.get(URI.create(uri), conf);
> FSDataInputStream in = fs.open(new Path(uri));
> IOUtils.copy(in, System.out, 4096);
> 
> Could anyone help me? Really appreciate it.
> 
> Thanks
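
One commonly used alternative, when running kinit beforehand is not practical,
is to log in programmatically from a keytab via UserGroupInformation before
touching the FileSystem. A rough sketch follows; the principal, keytab path and
file URI are placeholders, and the Kerberos-enabled core-site.xml and
hdfs-site.xml are assumed to be on the classpath:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureHdfsKeytabRead {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Normally picked up from core-site.xml; set explicitly here for clarity.
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Placeholder principal and keytab: replace with real values.
    UserGroupInformation.loginUserFromKeytab(
        "hadoop@EXAMPLE.COM", "/etc/security/keytabs/hadoop.keytab");

    // Placeholder URI: replace with the actual NameNode address and file path.
    String uri = "hdfs://namenode.example.com:8020/tmp/sample.txt";
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    FSDataInputStream in = fs.open(new Path(uri));
    try {
      // Copy the file contents to stdout without closing System.out.
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      IOUtils.closeStream(in);
    }
  }
}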


Re: "SIMPLE authentication is not enabled" error for secured hdfs read

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hi David,

UserGroupInformation.createRemoteUser does not attach credentials to the
returned ugi.  I expect the server side is rejecting the connection due to
lack of credentials.  This is actually by design.  The
UserGroupInformation.createRemoteUser method is primarily intended for use
on the server side when it wants to run a piece of its code while
impersonating the client.
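
Roughly, that server-side pattern looks like the sketch below; the class name,
method name, and the remoteUser value are only placeholders, and in a real
server the user name would come from the RPC or servlet layer:

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.security.UserGroupInformation;

    public class ServerSideExample {
        // Sketch only: "remoteUser" is the name of an already-authenticated caller.
        static void handleRequest(final String remoteUser) throws Exception {
            UserGroupInformation ugi = UserGroupInformation.createRemoteUser(remoteUser);
            ugi.doAs(new PrivilegedExceptionAction<Void>() {
                public Void run() throws Exception {
                    // Work done here is attributed to remoteUser for authorization
                    // checks, but no Kerberos credentials are attached to the ugi --
                    // which is why the same call fails when made from a remote client.
                    return null;
                }
            });
        }
    }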

I'd say that your second code sample is the correct one.  After running
kinit to get credentials, you can just run your code.  I expect Kerberos
authentication to work without taking any special measures to call
UserGroupInformation directly from your code.
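
Concretely, after kinit something as small as the sketch below should work,
as long as the core-site.xml/hdfs-site.xml that enable Kerberos
(hadoop.security.authentication=kerberos) are on the classpath; the class
name is a placeholder, and the keytab alternative in the trailing comment
uses a placeholder principal and path:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class SecureHdfsRead {
        public static void main(String[] args) throws Exception {
            String uri = args[0];                      // e.g. hdfs://namenode:8020/path/to/file
            Configuration conf = new Configuration();  // picks up the secure cluster settings
            FileSystem fs = FileSystem.get(URI.create(uri), conf);
            FSDataInputStream in = fs.open(new Path(uri));
            try {
                // Hadoop's own IOUtils.copyBytes, instead of commons-io IOUtils.copy
                IOUtils.copyBytes(in, System.out, 4096, false);
            } finally {
                in.close();
            }
        }
    }

    // If relying on the ticket cache from kinit is not an option, logging in
    // from code with a keytab also works:
    //     UserGroupInformation.setConfiguration(conf);
    //     UserGroupInformation.loginUserFromKeytab("hdfs@YOUR.REALM", "/path/to/hdfs.keytab");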

Hope this helps.

Chris Nauroth
Hortonworks
http://hortonworks.com/



