Posted to user@flink.apache.org by Edward <eg...@hotmail.com> on 2017/12/07 20:49:53 UTC

Re: specify user name when connecting to hdfs

I have the same question.
I am setting fs.hdfs.hadoopconf to the location of a Hadoop config directory. However,
when I start a job, I get an error showing that it is trying to connect to
the HDFS directory as user "flink":

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=flink, access=EXECUTE, inode="/user/site":site:hadoop:drwx------
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:206)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:158)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3495)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3478)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:3465)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:6596)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4377)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4355)

I have seen other threads on this list where people mention setting up
user impersonation in core-site.xml, but I've been unable to determine the
correct setting.
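
For what it's worth, the closest thing I've found is Hadoop's proxy-user
(impersonation) configuration, which goes in core-site.xml on the NameNode.
My rough understanding is that it looks something like the sketch below,
where "flink" is the account allowed to impersonate others; the wildcard
values are my guesses, not a verified config:

    <!-- core-site.xml on the NameNode. The property names follow
         hadoop.proxyuser.<superuser>.hosts and hadoop.proxyuser.<superuser>.groups;
         the wildcards are only for illustration. -->
    <property>
      <name>hadoop.proxyuser.flink.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.flink.groups</name>
      <value>*</value>
    </property>

Even with that in place, I believe the client still has to explicitly request
a proxy user, so I'm not sure this setting alone changes which user Flink
connects as.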




--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: specify user name when connecting to hdfs

Posted by Gordon Weakliem <gw...@sovrn.com>.
Seems like 3 possibilities:

1. Change the user Flink runs as to a user that has the required HDFS rights
2. hdfs chown the directory you're writing to (or hdfs chmod to open up access)
3. I've seen org.apache.hadoop.security.UserGroupInformation used to do
something like this:

    UserGroupInformation realUser =
        UserGroupInformation.createRemoteUser("theuserwithhdfsrights");
    UserGroupInformation.setLoginUser(realUser);
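
If it helps, here's a minimal, self-contained sketch of that idea wrapped
around an actual HDFS call. The user name "site" and the path are only
placeholders taken from the error above, so treat it as an illustration
rather than a drop-in fix:

    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HdfsAsUser {
        public static void main(String[] args) throws Exception {
            // "site" is a placeholder for the account that owns /user/site.
            UserGroupInformation ugi = UserGroupInformation.createRemoteUser("site");

            // Run the HDFS access inside doAs so the RPC is made as that user.
            ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
                // Picks up core-site.xml / hdfs-site.xml from the classpath
                // (i.e. the directory fs.hdfs.hadoopconf points at).
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);
                fs.mkdirs(new Path("/user/site/flink-output"));
                return null;
            });
        }
    }

Note that createRemoteUser only helps on clusters using simple authentication;
with Kerberos enabled you'd need to log in from a keytab instead.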

-- 
Gordon Weakliem | Sr. Software Engineer
O 303.493.5490
Boulder | NYC | London