Posted to user-zh@flink.apache.org by 孙森 <se...@163.com> on 2019/04/01 07:49:40 UTC

flink HA HDFS directory permission issue

Hi all:
         I start Flink in Flink-on-YARN mode with high availability enabled. When I submit a job to the Flink cluster, I get a permission-denied exception. The cause is that the directories created under hdfs:///flink/ha all have permission 755, with no write permission for the submitting user. Every time a new Flink cluster is started, a new directory is created, for example /flink/ha/application_1553766783203_0026, and I have to change the permissions on /flink/ha/application_1553766783203_0026 before a job can be submitted successfully. How should I solve this?
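
For reference, the HA-related part of my flink-conf.yaml looks roughly like this (a sketch; the ZooKeeper quorum host is a placeholder for my environment):

# flink-conf.yaml (sketch)
high-availability: zookeeper
# placeholder quorum; replace with the real ZooKeeper hosts
high-availability.zookeeper.quorum: zk-host:2181
high-availability.storageDir: hdfs:///flink/ha/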

The exception is as follows:
 The program finished with the following exception:

org.apache.flink.client.deployment.ClusterRetrieveException: Couldn't retrieve Yarn cluster
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.retrieve(AbstractYarnClusterDescriptor.java:409)
        at org.apache.flink.yarn.AbstractYarnClusterDescriptor.retrieve(AbstractYarnClusterDescriptor.java:111)
        at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:253)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
        at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/flink/ha/application_1553766783203_0026/blob":hdfs:hdfs:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:325)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:246)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1917)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4181)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1109)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:645)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)




Best!

Sen

Re: flink HA HDFS directory permission issue

Posted by Yun Tang <my...@live.com>.
I suspect your HDFS is configured with hdfs as a default user, so that directories are always created as the hdfs user. Check the user name that the Flink application runs under on the YARN web UI: is it root? The simplest workaround, as described in [1], is to set the environment variable HADOOP_USER_NAME to hdfs, so that when you submit the job with flink run the operation is performed as the hdfs user.

export HADOOP_USER_NAME=hdfs

[1] https://stackoverflow.com/questions/11371134/how-to-specify-username-when-putting-files-on-hdfs-from-a-remote-machine
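
For example, roughly (a sketch: the jar path is a placeholder, and -yid attaches the client to your running YARN session):

export HADOOP_USER_NAME=hdfs
# submit to the existing YARN session; ./your-job.jar is a placeholder
flink run -yid application_1553766783203_0026 ./your-job.jar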


Re: flink HA HDFS directory permission issue

Posted by 孙森 <se...@163.com>.
Changing the directory permissions works for the existing files, but newly created directories still have no write permission.
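
What I ran was along these lines (a sketch):

# open up the existing tree recursively
hadoop fs -chmod -R 777 /flink/ha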

[root@hdp1 ~]# hadoop fs -ls /flink/ha
Found 15 items
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/0e950900-c00e-4f24-a0bd-880ba9029a92
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/42e61028-e063-4257-864b-05f46e121a4e
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/58465b44-1d38-4f46-a450-edc06d2f625f
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/61b6a5b8-1e11-4ac1-99e4-c4dce842aa38
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/931291f3-717c-4ccb-a622-0207037267a8
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:14 /flink/ha/application_1553766783203_0026
drwxr-xr-x   - hdfs hdfs          0 2019-04-01 16:13 /flink/ha/application_1553766783203_0028
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/b2d16faa-ae2e-4130-81b8-56eddb9ef317
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/bef09af0-6462-4c88-8998-d18f922054a1
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/bf486c37-ab44-49a1-bb66-45be4817773d
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/c07351fb-b2d8-4aec-801c-27a983ca3f32
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/d779d3a2-3ec8-4998-ae9c-9d93ffb7f265
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:12 /flink/ha/dee74bc7-d450-4fb4-a9f2-4983d1f9949f
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/edd59fcf-8413-4ceb-92cf-8dcd637803f8
drwxrwxrwx   - hdfs hdfs          0 2019-04-01 15:13 /flink/ha/f6329551-56fb-4c52-a028-51fd838c4af6


Re: flink HA HDFS directory permission issue

Posted by Yun Tang <my...@live.com>.
Hi 孙森,

Adding the submitting user root to Hadoop's hdfs group, submitting the program as the hdfs user [1], or opening up the permissions on the whole hdfs:///flink/ha directory [2] to all users should solve the problem. Remember to add -R so that the change also takes effect on subdirectories.


[1] https://stackoverflow.com/questions/11371134/how-to-specify-username-when-putting-files-on-hdfs-from-a-remote-machine
[2] https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/FileSystemShell.html#chmod
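
A rough sketch of these options (the jar path is a placeholder; the chmod must run as an HDFS superuser, and the group change assumes the default shell-based group mapping, which is resolved on the NameNode):

# [2] open up the whole directory; -R makes it recursive
hadoop fs -chmod -R 777 /flink/ha

# [1] submit as the hdfs user for this command only
HADOOP_USER_NAME=hdfs flink run ./your-job.jar

# or add root to the hdfs group on the NameNode host, then refresh
# the NameNode's user-to-groups mapping cache
usermod -a -G hdfs root
hdfs dfsadmin -refreshUserToGroupsMappings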

Best regards
Yun Tang
