Posted to dev@hive.apache.org by "Ethan Aubin (JIRA)" <ji...@apache.org> on 2016/10/14 15:44:20 UTC

[jira] [Created] (HIVE-14961) HDFS dir permission check in SessionState.createRootHDFSDir

Ethan Aubin created HIVE-14961:
----------------------------------

             Summary: HDFS dir permission check in SessionState.createRootHDFSDir
                 Key: HIVE-14961
                 URL: https://issues.apache.org/jira/browse/HIVE-14961
             Project: Hive
          Issue Type: Bug
          Components: Hive
    Affects Versions: 1.2.1
            Reporter: Ethan Aubin


SessionState.createRootHDFSDir creates the scratch directory and fails if its permissions do not include all of the 733 bits (see https://github.com/apache/hive/blob/ff67cdda1c538dc65087878eeba3e165cf3230f4/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L687-L709).
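As I read the linked lines, the check is a bitmask test: every bit of 733 must be present in the directory's current permissions, so stricter modes like 730 or 700 are rejected even though they are more restrictive. A simplified, self-contained sketch of that logic (class and method names here are mine, not Hive's):

```java
// Simplified sketch of the permission test in SessionState.createRootHDFSDir.
// The scratch dir must carry at least the bits of 0733 (rwx-wx-wx), so
// stricter modes such as 0730 or 0700 fail the check.
public class ScratchDirPermCheck {
    static final short REQUIRED = 0733; // octal: rwx for owner, wx for group and other

    // Every bit in REQUIRED must also be set in the current mode.
    static boolean hasRequiredBits(short current) {
        return (current & REQUIRED) == REQUIRED;
    }

    public static void main(String[] args) {
        System.out.println(hasRequiredBits((short) 0733)); // true
        System.out.println(hasRequiredBits((short) 0777)); // true: all 733 bits present
        System.out.println(hasRequiredBits((short) 0730)); // false: other wx bits missing
        System.out.println(hasRequiredBits((short) 0700)); // false: group and other bits missing
    }
}
```

So the failure on a locked-down cluster is not about the directory being too open; it is the other-write/execute bits being required outright.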

I'd like to run Spark on a Linux cluster where `other` permissions on the tmp/scratch directory are not allowed for security reasons. Spark fails when it initializes Hive (the same behavior is reported in https://issues.apache.org/jira/browse/SPARK-10528).

Not knowing Hive very well: is there a reason why 730 or 700 would not work?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)