Posted to issues@hive.apache.org by "Thejas M Nair (JIRA)" <ji...@apache.org> on 2017/12/15 22:33:00 UTC

[jira] [Commented] (HIVE-18287) Scratch dir permission check doesn't honor Ranger based privileges

    [ https://issues.apache.org/jira/browse/HIVE-18287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293372#comment-16293372 ] 

Thejas M Nair commented on HIVE-18287:
--------------------------------------

It needs to use the FileSystem.access method to verify permissions, so that the effective permissions, which include Ranger-based permissions, are used.

This might be an issue with Sentry-based permissions as well. cc [~akolb]
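A rough sketch of what an access-based check could look like (the class and method names and the exact error wording here are illustrative, not the actual patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public class ScratchDirCheck {
  // Ask the filesystem whether the current user can actually write to and
  // traverse the directory. For HDFS, FileSystem.access() evaluates the
  // effective permissions on the NameNode, which include authorizer plugins
  // such as Ranger, instead of comparing raw FsPermission bits client-side.
  public static void checkScratchDirWritable(FileSystem fs, Path rootHDFSDirPath)
      throws IOException {
    try {
      fs.access(rootHDFSDirPath, FsAction.WRITE_EXECUTE);
    } catch (AccessControlException e) {
      throw new RuntimeException("The root scratch dir: " + rootHDFSDirPath
          + " on HDFS should be writable for the current user.", e);
    }
  }
}
{code}

Note that FileSystem.access(Path, FsAction) is only available from Hadoop 2.6 onwards (HDFS-6570); builds against older Hadoop would still need the bit-mask comparison as a fallback.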



> Scratch dir permission check doesn't honor Ranger based privileges
> ------------------------------------------------------------------
>
>                 Key: HIVE-18287
>                 URL: https://issues.apache.org/jira/browse/HIVE-18287
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, Security
>    Affects Versions: 1.0.0, 2.4.0
>            Reporter: Kunal Rajguru
>
> Hiveserver2 needs permission 733 or above on the scratch directory to start successfully.
> HS2 does not take into consideration the permissions granted on the scratch dir via Ranger; it expects the permissions at the HDFS level.
> Even if we give full access to the 'hive' user from Ranger, HS2 fails to start; it expects the permissions to be set in HDFS (# hdfs dfs -chmod 755 /tmp/hive).
> >> SessionState.java:
> {code:java}
> private Path createRootHDFSDir(HiveConf conf) throws IOException {
>   Path rootHDFSDirPath = new Path(HiveConf.getVar(conf, HiveConf.ConfVars.SCRATCHDIR));
>   FsPermission writableHDFSDirPermission = new FsPermission((short)00733);
>   FileSystem fs = rootHDFSDirPath.getFileSystem(conf);
>   if (!fs.exists(rootHDFSDirPath)) {
>     Utilities.createDirsWithPermission(conf, rootHDFSDirPath, writableHDFSDirPermission, true);
>   }
>   FsPermission currentHDFSDirPermission = fs.getFileStatus(rootHDFSDirPath).getPermission();
>   if (rootHDFSDirPath != null && rootHDFSDirPath.toUri() != null) {
>     String schema = rootHDFSDirPath.toUri().getScheme();
>     LOG.debug("HDFS root scratch dir: " + rootHDFSDirPath + " with schema " + schema
>         + ", permission: " + currentHDFSDirPermission);
>   } else {
>     LOG.debug("HDFS root scratch dir: " + rootHDFSDirPath + ", permission: "
>         + currentHDFSDirPermission);
>   }
>   // If the root HDFS scratch dir already exists, make sure it is writeable.
>   if (!((currentHDFSDirPermission.toShort() & writableHDFSDirPermission
>       .toShort()) == writableHDFSDirPermission.toShort())) {
>     throw new RuntimeException("The root scratch dir: " + rootHDFSDirPath
>         + " on HDFS should be writable. Current permissions are: " + currentHDFSDirPermission);
>   }
> {code}
> >> Error message:
> {code:java}
> 2017-08-23 09:56:13,965 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds 
> java.lang.RuntimeException: Error applying authorization policy on hive configuration: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-x--- 
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:117) 
> at org.apache.hive.service.CompositeService.init(CompositeService.java:59) 
> at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:122) 
> at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:474) 
> at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87) 
> at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720) 
> at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593) 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
> at java.lang.reflect.Method.invoke(Method.java:498) 
> at org.apache.hadoop.util.RunJar.run(RunJar.java:233) 
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148) 
> Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-x--- 
> at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:547) 
> at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:130) 
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:115) 
> ... 12 more 
> Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-x--- 
> at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:648) 
> at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:580) 
> at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> ... 14 more
> {code}
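The bit-mask comparison in the quoted createRootHDFSDir explains the failure mode seen in the stack trace: the check only looks at the POSIX-style bits returned by getFileStatus(), so a directory whose HDFS mode is rwxr-x--- (0750) fails the check even when a Ranger policy grants the user full access. A minimal illustration of the arithmetic (standalone demo, not Hive code):

{code:java}
public class PermissionMaskDemo {
  public static void main(String[] args) {
    short current = (short) 0750;   // rwxr-x--- as reported by HDFS
    short required = (short) 0733;  // mask demanded by createRootHDFSDir
    // 0750 & 0733 = 0710, which is not equal to 0733, so the
    // RuntimeException is thrown regardless of any Ranger policy.
    System.out.println(Integer.toOctalString(current & required)); // 710
    System.out.println((current & required) == required);          // false
  }
}
{code}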



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)