Posted to dev@ranger.apache.org by "Uma Maheswara Rao G (Jira)" <ji...@apache.org> on 2020/10/23 18:24:00 UTC

[jira] [Commented] (RANGER-3058) [ranger-hive] create table fails when ViewDFS( client side HDFS mounting fs) mount points are targeting to Ozone FS

    [ https://issues.apache.org/jira/browse/RANGER-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17219862#comment-17219862 ] 

Uma Maheswara Rao G commented on RANGER-3058:
---------------------------------------------

RangerHiveAuthorizer#checkPrivileges calls the Hive FileUtils API to check FS access.
 Here is a trace that helps understand the issue:
{code:java}
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3483) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3457) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:571) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme$ChildFsGetter.getNewInstance(ViewFileSystemOverloadScheme.java:206) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$InnerCache.get(ViewFileSystem.java:141) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:325) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.getTargetFileSystem(ViewFileSystem.java:319) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:362) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:618) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:319) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:318) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme.initialize(ViewFileSystemOverloadScheme.java:161) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.hdfs.ViewDistributedFileSystem.tryInitializeMountingViewFs(ViewDistributedFileSystem.java:179) [hadoop-hdfs-client-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.hdfs.ViewDistributedFileSystem.initialize(ViewDistributedFileSystem.java:140) [hadoop-hdfs-client-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3423) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3483) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3451) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:518) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.hive.common.FileUtils$3.run(FileUtils.java:445) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_232]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_232]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) [hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.hive.common.FileUtils.checkFileAccessWithImpersonation(FileUtils.java:442) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.hadoop.hive.common.FileUtils.isActionPermittedForFileHierarchy(FileUtils.java:502) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.hadoop.hive.common.FileUtils.isActionPermittedForFileHierarchy(FileUtils.java:517) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.hadoop.hive.common.FileUtils.isActionPermittedForFileHierarchy(FileUtils.java:483) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.isURIAccessAllowed(RangerHiveAuthorizer.java:1998) [ranger-hive-plugin-2.1.0.7.2.3.0-128.jar:2.1.0.7.2.3.0-128]
	at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:809) [ranger-hive-plugin-2.1.0.7.2.3.0-128.jar:2.1.0.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:77) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:406) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:188) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:600) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:546) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:540) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:127) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:260) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.Operation.run(Operation.java:274) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:565) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:551) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:567) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:654) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
{code}
Unfortunately, FileUtils suppresses the real trace coming from the FS. You would just see the following line in the logs:
 Thread-6588]: Action ALL denied on hdfs://ns1/test/sample-sales.csv for user systest

That is because FileUtils does not log the exception trace:
{code:java}
try {
      checkFileAccessWithImpersonation(fs, fileStatus, action, userName, subDirsToCheck);
    } catch (AccessControlException err) {
      // Action not permitted for user
      LOG.warn("Action " + action + " denied on " + fileStatus.getPath() + " for user " + userName);
      return false;
    }
{code}
That's probably to keep the logs from becoming too verbose, but it's hard to understand the failure from that single line, so I tried patching FileUtils to also print the trace; once the exception trace is printed, we can see where exactly it's failing.
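A rough sketch of that local tweak (this is just what I tried for debugging, not the actual Hive change; it only passes the caught exception as the second argument to LOG.warn so the stack trace gets emitted):
{code:java}
try {
      checkFileAccessWithImpersonation(fs, fileStatus, action, userName, subDirsToCheck);
    } catch (AccessControlException err) {
      // Action not permitted for user; also log the cause so we can see why it was denied
      LOG.warn("Action " + action + " denied on " + fileStatus.getPath() + " for user "
          + userName, err);
      return false;
    }
{code}
With the trace printed, the log output looks like this: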
{code:java}
2020-10-23 00:32:01,874 WARN  org.apache.hadoop.hive.common.FileUtils: [7957af5d-4550-456f-a983-8a4f43cffa05 HiveServer2-Handler-Pool: Thread-6588]: Action ALL denied on hdfs://ns1/test/sample-sales.csv for user systest
java.security.AccessControlException: Permission denied: user=systest, path="o3fs://bucket.volume.ozone1/test/sample-sales.csv":systest:systest:-rw-rw-rw-
	at org.apache.hadoop.hive.shims.Hadoop23Shims.wrapAccessException(Hadoop23Shims.java:953) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.shims.Hadoop23Shims.checkFileAccess(Hadoop23Shims.java:937) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.common.FileUtils$3.run(FileUtils.java:446) ~[hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_232]
	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_232]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) ~[hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.hive.common.FileUtils.checkFileAccessWithImpersonation(FileUtils.java:442) ~[hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.hadoop.hive.common.FileUtils.isActionPermittedForFileHierarchy(FileUtils.java:502) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.hadoop.hive.common.FileUtils.isActionPermittedForFileHierarchy(FileUtils.java:517) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.hadoop.hive.common.FileUtils.isActionPermittedForFileHierarchy(FileUtils.java:483) [hive-common-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-SNAPSHOT]
	at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.isURIAccessAllowed(RangerHiveAuthorizer.java:1998) [ranger-hive-plugin-2.1.0.7.2.3.0-128.jar:2.1.0.7.2.3.0-128]
	at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:809) [ranger-hive-plugin-2.1.0.7.2.3.0-128.jar:2.1.0.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:77) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:406) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:188) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:600) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:546) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:540) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:127) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:260) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.Operation.run(Operation.java:274) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:565) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:551) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:567) [hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:654) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) [hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
	at org.apache.hadoop.hive.shims.Hadoop23Shims.checkFileAccess(Hadoop23Shims.java:934) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	... 35 more
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=systest, path="o3fs://bucket.volume.ozone1/test/sample-sales.csv":systest:systest:-rw-rw-rw-
	at org.apache.hadoop.fs.FileSystem.checkAccessPermissions(FileSystem.java:2725) ~[hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FileSystem.access(FileSystem.java:2694) ~[hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.FilterFileSystem.access(FilterFileSystem.java:462) ~[hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.access(ChRootedFileSystem.java:256) ~[hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.fs.viewfs.ViewFileSystem.access(ViewFileSystem.java:557) ~[hadoop-common-3.1.1.7.2.3.0-128.jar:?]
	at org.apache.hadoop.hdfs.ViewDistributedFileSystem.access(ViewDistributedFileSystem.java:1756) ~[hadoop-hdfs-client-3.1.1.7.2.3.0-128.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232]
	at org.apache.hadoop.hive.shims.Hadoop23Shims.checkFileAccess(Hadoop23Shims.java:934) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	... 35 more
{code}
It will fail because the Ozone file has permissions as follows:
 Found 1 items
 -rw-rw-rw- 3 systest systest 215 2020-10-23 07:02 /test/sample-sales.csv
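To make the failure concrete: the requested action for createTable is ALL (rwx), and the default FileSystem access check passes only when the file's permission bits imply the requested action. A tiny standalone illustration of that semantics (not the actual Hive/Hadoop code path, just the FsAction/FsPermission behavior):
{code:java}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermCheckDemo {
  public static void main(String[] args) {
    // Permissions of the Ozone key, as shown in the listing above
    FsPermission perm = FsPermission.valueOf("-rw-rw-rw-");
    // createTable asks for ALL (rwx)
    FsAction requested = FsAction.ALL;
    // rw- does not imply rwx, so FileSystem.checkAccessPermissions() throws
    // AccessControlException, which is what we see in the trace above
    System.out.println(perm.getUserAction().implies(requested)); // false
  }
}
{code}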

Interestingly, if you execute the same query with the location pointing directly to the Ozone path, the query will succeed:
{code:java}
0: jdbc:hive2://umag-1.umag.root.hwx.site:218> CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION 'o3fs://bucket.volume.ozone1/test';
INFO  : Compiling command(queryId=hive_20201023181044_ea7a0f42-6dde-4900-8dc2-ffe3bcba448a): CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION 'o3fs://bucket.volume.ozone1/test'
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Created Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling command(queryId=hive_20201023181044_ea7a0f42-6dde-4900-8dc2-ffe3bcba448a); Time taken: 0.449 seconds
INFO  : Executing command(queryId=hive_20201023181044_ea7a0f42-6dde-4900-8dc2-ffe3bcba448a): CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION 'o3fs://bucket.volume.ozone1/test'
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20201023181044_ea7a0f42-6dde-4900-8dc2-ffe3bcba448a); Time taken: 0.782 seconds
INFO  : OK
No rows affected (1.547 seconds)
0: jdbc:hive2://umag-1.umag.root.hwx.site:218>
{code}
This clearly tells us the authorizer flows are different. If the given scheme is not part of the hivePlugin FS schemes (currently hdfs:, file:), it will not go through the fs API to check permissions. But in the mounted fs case the visible path is always hdfs, so even though the underlying path is an Ozone FS path, it takes the HDFS flow in RangerHiveAuthorizer.
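A simplified illustration of that decision (a hypothetical helper, not the exact RangerHiveAuthorizer code; it only looks at the visible scheme of the path string):
{code:java}
public class SchemeFlowDemo {
  // from RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT = "hdfs:,file:"
  private static final String[] FS_SCHEMES = {"hdfs:", "file:"};

  static boolean takesFsAccessFlow(String path) {
    for (String scheme : FS_SCHEMES) {
      if (path.startsWith(scheme)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Visible (mounted) path: goes through the fs-based check, which then fails on Ozone
    System.out.println(takesFsAccessFlow("hdfs://ns1/test"));                  // true
    // Direct Ozone path: skips the fs-based check, so the same query succeeds
    System.out.println(takesFsAccessFlow("o3fs://bucket.volume.ozone1/test")); // false
  }
}
{code}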

So, to handle this case we should figure out the actual underlying path and then check its scheme. If the underlying path is still HDFS (HDFS mount points can target another HDFS cluster), then the current flow should be fine. Otherwise we should let the flow proceed as for other filesystems (Ozone/S3).

I tried to fix this using the resolvePath API and it worked with that patch.
 The idea here is:
 First, check whether the fs has any mount points.
 If yes, resolve the path with resolvePath and check the scheme. If the scheme is not part of the hivePlugin FS schemes (hdfs:, file:), follow the S3/Ozone fs flow.
 If the scheme is part of the hivePlugin FS schemes, go on and check isURIAccessAllowed.
 A rough sketch of this approach is shown below.
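This is only a sketch of the approach, not the actual patch (useUriAccessCheck and HIVE_PLUGIN_FS_SCHEMES are illustrative names); it relies on Hadoop's FileSystem#resolvePath walking the ViewDFS/ViewFileSystem mount table:
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ResolvedSchemeCheck {
  // mirrors RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT = "hdfs:,file:"
  private static final String[] HIVE_PLUGIN_FS_SCHEMES = {"hdfs:", "file:"};

  /**
   * Returns true if the resolved target of 'path' is still HDFS/local, i.e. the
   * existing isURIAccessAllowed (fs access) flow applies; returns false when the
   * path resolves to some other fs (o3fs/s3a/...), which should follow the
   * Ozone/S3 flow instead.
   */
  public static boolean useUriAccessCheck(Path path, Configuration conf) throws IOException {
    FileSystem fs = path.getFileSystem(conf);
    // resolvePath() resolves through the mount table, so hdfs://ns1/test mounted on
    // o3fs://bucket.volume.ozone1/test comes back with the o3fs URI
    Path resolved = fs.resolvePath(path);
    String resolvedScheme = resolved.toUri().getScheme();
    for (String scheme : HIVE_PLUGIN_FS_SCHEMES) {
      if (scheme.equalsIgnoreCase(resolvedScheme + ":")) {
        return true;
      }
    }
    return false;
  }
}
{code}
In checkPrivileges, a check like this would gate whether we go to isURIAccessAllowed or fall through to the policy-based flow used for other filesystems.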

I will post a patch shortly with these changes.

> [ranger-hive] create table fails when ViewDFS( client side HDFS mounting fs) mount points are targeting to Ozone FS 
> --------------------------------------------------------------------------------------------------------------------
>
>                 Key: RANGER-3058
>                 URL: https://issues.apache.org/jira/browse/RANGER-3058
>             Project: Ranger
>          Issue Type: Bug
>          Components: plugins, Ranger
>            Reporter: Uma Maheswara Rao G
>            Priority: Major
>
> Currently RangerHiveAuthorizer has specific logic flows for HDFS and S3/Ozone.
> If the fs scheme is part of hivePlugin#getFSScheme[1], then it will go and check privileges via fs.  
> [1]	private static String RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT = "hdfs:,file:";
> Flow will come to the following code piece:
> if (!isURIAccessAllowed(user, permission, path, fs)) {
>     throw new HiveAccessControlException(String.format(
>         "Permission denied: user [%s] does not have [%s] privilege on [%s]",
>         user, permission.name(), path));
> }
> continue;
> But when we have paths mounted to another fs, like Ozone, the current path will be an hdfs-based path while in reality that path is an Ozone fs path; the resolution happens later inside the mount fs. At that point fs#access will be called to check permissions. Currently the access API is implemented only in HDFS; once resolution happens, the call is delegated to OzoneFs, but OzoneFS does not implement the access API.
> So, the default abstract FileSystem implementation just expects the file's permissions to match the requested mode.
> Here the expected action mode for createTable is ALL, but Ozone/S3 paths will not have rwx permissions on keys. So it will fail.
> 0: jdbc:hive2://umag-1.umag.root.xxx.site:218> CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/test';
> Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test] (state=42000,code=40000)
> 0: jdbc:hive2://umag-1.umag.root.xxx.site:218>
> My mount point on hdfs is configured as follows:
> fs.viewfs.mounttable.ns1.link./test --> o3fs://bucket.volume.ozone1/test
> hdfs://ns1/test will be resolved as o3fs://bucket.volume.ozone1/test. 
> So, checkPrivileges will fail:
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test]
> 	at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:810) ~[?:?]
> 	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:77) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:406) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:188) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:600) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:546) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:540) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:127) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
> 	... 15 more
> {code}
> I will add more trace details in the comments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)