Posted to dev@ranger.apache.org by Uma Maheswara Rao Gangumalla <um...@gmail.com> on 2020/10/26 17:45:10 UTC

Re: Review Request 72989: RANGER-3058 : [ranger-hive] create table fails when ViewDFS(client side HDFS mounting fs) mount points are targeting to Ozone/S3 FS

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72989/
-----------------------------------------------------------

(Updated Oct. 26, 2020, 5:45 p.m.)


Review request for ranger and Ramesh Mani.


Repository: ranger


Description
-------

Currently RangerHiveAuthorizer has specific logic flows for HDFS and S3/Ozone.

If the fs scheme is part of hivePlugin#getFSScheme [1], privileges will be checked via the fs.
[1] private static String RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT = "hdfs:,file:";
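
Roughly paraphrased (a sketch based on the description above, not the actual Ranger source), the scheme gate looks like this:

static boolean useFileSystemCheck(String path) {
    // RANGER_PLUGIN_HIVE_ULRAUTH_FILESYSTEM_SCHEMES_DEFAULT
    String schemes = "hdfs:,file:";
    for (String scheme : schemes.split(",")) {
        if (path.startsWith(scheme)) {
            return true;  // privilege check goes through the FileSystem
        }
    }
    return false;         // otherwise Ranger URL policies apply
}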

The flow then reaches the following code piece:

if (!isURIAccessAllowed(user, permission, path, fs)) {
    throw new HiveAccessControlException(String.format(
        "Permission denied: user [%s] does not have [%s] privilege on [%s]",
        user, permission.name(), path));
}
continue;

But when we have paths mounted to another fs, like Ozone, the current path will be an hdfs-based path while in reality that path is an Ozone fs path; this resolution happens later inside the mount fs. At that point, fs#access will be called to check permissions. Currently the access API is implemented only in HDFS; once resolution happens, the call will be delegated to OzoneFS, but OzoneFS does not implement the access API.
So the default abstract FileSystem implementation just expects the file's permissions to match the expected mode.
Here the expected action mode for createTable is ALL, but Ozone/S3 paths will not have rwx permissions on keys, so it will fail.
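
Simplified, that default fallback behaves roughly like this (a sketch derived from the description above, not the exact Hadoop source):

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.AccessControlException;

static void checkAccessPermissions(FileStatus stat, FsAction mode, String user)
        throws AccessControlException {
    FsPermission perm = stat.getPermission();
    // Owner/group/other resolution simplified here to owner vs. other.
    FsAction granted = user.equals(stat.getOwner())
        ? perm.getUserAction() : perm.getOtherAction();
    // createTable requests FsAction.ALL (rwx); Ozone keys typically carry only
    // rw, so implies() returns false and the check fails.
    if (!granted.implies(mode)) {
        throw new AccessControlException("Permission denied: expected " + mode);
    }
}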

0: jdbc:hive2://umag-1.umag.root.xxx.site:218> CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/test';
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test] (state=42000,code=40000)
0: jdbc:hive2://umag-1.umag.root.xxx.site:218>

My mount point on hdfs is configured as follows:
fs.viewfs.mounttable.ns1.link./test --> o3fs://bucket.volume.ozone1/test

hdfs://ns1/test will be resolved as o3fs://bucket.volume.ozone1/test.
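
In core-site.xml form, that mount entry looks like this (property name and value taken from the description above):

<property>
  <name>fs.viewfs.mounttable.ns1.link./test</name>
  <value>o3fs://bucket.volume.ozone1/test</value>
</property>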

So, checkPrivileges will fail:

Caused by: org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: user [systest] does not have [ALL] privilege on [hdfs://ns1/test]
	at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.checkPrivileges(RangerHiveAuthorizer.java:810) ~[?:?]
	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:77) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:406) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:109) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:188) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:600) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:546) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:540) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:127) ~[hive-exec-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:199) ~[hive-service-3.1.3000.7.2.3.0-128.jar:3.1.3000.7.2.3.0-128]
	... 15 more
I will add more trace details in the comments.

For more details, please see the RANGER-3058 JIRA. (https://issues.apache.org/jira/browse/RANGER-3058)


Diffs
-----

  hive-agent/src/main/java/org/apache/ranger/authorization/hive/authorizer/RangerHiveAuthorizer.java 1bec50b37 


Diff: https://reviews.apache.org/r/72989/diff/1/


Testing
-------

Testing steps:
 1. Created a cluster with Ranger enabled.
 2. Copied the sample-sales.csv file to the Ozone /test folder.
 3. Created a mount point from hdfs://ns1/test to o3fs://bucket.volume.ozone1/test (the Ozone volume and bucket were created before this step; a command sketch follows these steps). This is done by adding the following entry to core-site.xml:
    fs.viewfs.mounttable.ns1.link./test = o3fs://bucket.volume.ozone1/test
 4. Created the external table with the following query:
    CREATE EXTERNAL TABLE testtable1 (order_id BIGINT, user_id STRING, item STRING, state STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE LOCATION '/test'
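
For reference, the Ozone preparation in steps 2-3 can be done roughly as follows (a sketch only; the volume and bucket names are assumed from the o3fs URI above, and exact commands may differ by Ozone version):

# Create the Ozone volume and bucket behind the mount target.
ozone sh volume create /volume
ozone sh bucket create /volume/bucket
# Copy the sample data into the mount target location.
hdfs dfs -put sample-sales.csv o3fs://bucket.volume.ozone1/test/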

 Without this patch, table creation fails; with this patch, it succeeds.

 Also verified table creation on a normal hdfs folder path with this patch, to ensure regular hdfs paths are not impacted; the table was created successfully.


Thanks,

Uma Maheswara Rao Gangumalla


Re: Review Request 72989: RANGER-3058 : [ranger-hive] create table fails when ViewDFS(client side HDFS mounting fs) mount points are targeting to Ozone/S3 FS

Posted by Uma Maheswara Rao Gangumalla <um...@gmail.com>.

> On Nov. 3, 2020, 6:30 p.m., Ramesh Mani wrote:
> > hive-agent/src/main/java/org/apache/ranger/authorization/hive/authorizer/RangerHiveAuthorizer.java
> > Line 818 (original), 822 (patched)
> > <https://reviews.apache.org/r/72989/diff/1/?file=2241667#file2241667line822>
> >
> >     Since this is a new check done here, what permission is expected from the FileStatus? There should be a policy maintained in order for this to succeed. Could you please give details on it?

Thank you for the review!
At this stage, we don't know whether the path is really an hdfs or non-hdfs path; we only know that once we create the fs object and invoke resolvePath. If fs init fails, we were throwing the same exception before as well, since this fs-creation logic used to live inside isURIAccessAllowed: on any IOException we returned false, and on false we threw a HiveAccessControlException. So now I have just moved that fs init part here and throw the same exception if init fails. Init failures usually happen when the NN is down, etc., as it tries to create a proxy to the NN.
Also, to your question: the expected permission here was ALL. For an Ozone-mounted path, ALL will not be satisfied, as keys only have rw by default and there is no execute permission bit. An Ozone path can only go through URL permission checks.
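
A hypothetical sketch of that resolution step (illustrative only, not the actual RANGER-3058 diff; the method name resolvesOutsideHdfs is invented for this example):

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

static boolean resolvesOutsideHdfs(String pathStr, Configuration conf)
        throws IOException {
    // FileSystem.get() can fail (e.g. NN down, since it creates a proxy to
    // the NN); the caller turns that IOException into the same
    // HiveAccessControlException as before the patch.
    FileSystem fs = FileSystem.get(URI.create(pathStr), conf);
    // ViewDFS resolves hdfs://ns1/test to its mount target, e.g.
    // o3fs://bucket.volume.ozone1/test.
    Path resolved = fs.resolvePath(new Path(pathStr));
    String scheme = resolved.toUri().getScheme();
    // Only hdfs/file paths keep the fs-based privilege check; anything else
    // (o3fs, s3a, ...) falls through to Ranger URL policy authorization.
    return !"hdfs".equals(scheme) && !"file".equals(scheme);
}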


- Uma Maheswara Rao


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72989/#review222165
-----------------------------------------------------------




Re: Review Request 72989: RANGER-3058 : [ranger-hive] create table fails when ViewDFS(client side HDFS mounting fs) mount points are targeting to Ozone/S3 FS

Posted by Ramesh Mani <rm...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72989/#review222165
-----------------------------------------------------------




hive-agent/src/main/java/org/apache/ranger/authorization/hive/authorizer/RangerHiveAuthorizer.java
Line 818 (original), 822 (patched)
<https://reviews.apache.org/r/72989/#comment311233>

    Since this is a new check done here, what permission is expected from the FileStatus? There should be a policy maintained in order for this to succeed. Could you please give details on it?


- Ramesh Mani




Re: Review Request 72989: RANGER-3058 : [ranger-hive] create table fails when ViewDFS(client side HDFS mounting fs) mount points are targeting to Ozone/S3 FS

Posted by Ramesh Mani <rm...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/72989/#review222178
-----------------------------------------------------------


Ship it!

- Ramesh Mani

