Posted to common-dev@hadoop.apache.org by "Oleksandr Shevchenko (JIRA)" <ji...@apache.org> on 2018/09/10 18:49:01 UTC

[jira] [Resolved] (HADOOP-15725) FileSystem.deleteOnExit should check user permissions

     [ https://issues.apache.org/jira/browse/HADOOP-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Shevchenko resolved HADOOP-15725.
-------------------------------------------
    Resolution: Invalid

> FileSystem.deleteOnExit should check user permissions
> -----------------------------------------------------
>
>                 Key: HADOOP-15725
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15725
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Oleksandr Shevchenko
>            Priority: Major
>              Labels: Security
>         Attachments: deleteOnExitReproduce
>
>
> Currently, any file can be added to a FileSystem's deleteOnExit list, which leads to a security problem. A user ("Intruder") can obtain a file system instance that was created by another user ("Owner") and mark any files for deletion, even if the "Intruder" has no access to those files. Later, when the "Owner" invokes the close method (or when the JVM shuts down, since a ShutdownHook closes all cached file systems), the marked files are deleted successfully, because the deletion is performed on behalf of the "Owner" (i.e., the user who ran the program).
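> For context, a simplified, self-contained sketch of the behaviour described above (paraphrased, not the actual FileSystem source): deleteOnExit() only records the path, and processDeleteOnExit(), invoked from close() and from the shutdown hook, performs the deletions under the identity of the user who owns the FileSystem instance:
> {code:java}
> import java.io.IOException;
> import java.util.Iterator;
> import java.util.Set;
> import java.util.TreeSet;
>
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Simplified sketch (paraphrased, not the exact Hadoop source) of how
> // FileSystem tracks and processes deleteOnExit paths.
> class DeleteOnExitSketch {
>   private final FileSystem fs;
>   private final Set<Path> deleteOnExit = new TreeSet<>();
>
>   DeleteOnExitSketch(FileSystem fs) {
>     this.fs = fs;
>   }
>
>   // Only records the path; there is no check that the caller
>   // is actually allowed to delete it.
>   boolean deleteOnExit(Path f) throws IOException {
>     if (!fs.exists(f)) {
>       return false;
>     }
>     synchronized (deleteOnExit) {
>       deleteOnExit.add(f);
>     }
>     return true;
>   }
>
>   // Invoked from close() and from the shutdown hook that closes cached
>   // file systems; deletions run with the credentials of the user who
>   // created/closes the FileSystem, not the user who marked the paths.
>   void processDeleteOnExit() {
>     synchronized (deleteOnExit) {
>       for (Iterator<Path> iter = deleteOnExit.iterator(); iter.hasNext();) {
>         Path path = iter.next();
>         try {
>           fs.delete(path, true);
>         } catch (IOException e) {
>           // best effort: failures are ignored
>         }
>         iter.remove();
>       }
>     }
>   }
> }
> {code}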
> I attached the patch [^deleteOnExitReproduce] which reproduces this scenario, and I was also able to reproduce it on a cluster with both the Local and Distributed file systems:
> {code:java}
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class Main {
>
>   public static void main(String[] args) throws Exception {
>     // File system instance created by the "Owner" (here the root user running the program).
>     Configuration conf = new Configuration();
>     conf.set("fs.default.name", "hdfs://node:9000");
>     conf.set("fs.hdfs.impl",
>         org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
>     final FileSystem fs = FileSystem.get(conf);
>     System.out.println(fs);
>
>     // File owned by "root" with mode 700, so no other user has access to it.
>     Path f = new Path("/user/root/testfile");
>     System.out.println(f);
>
>     // The "Intruder" ("hive") marks the Owner's file for deletion
>     // even though it has no permissions on the file.
>     UserGroupInformation hive = UserGroupInformation.createRemoteUser("hive");
>     hive.doAs((PrivilegedExceptionAction<Boolean>) () -> fs.deleteOnExit(f));
>
>     // close() triggers processDeleteOnExit(), which deletes the file as the "Owner".
>     fs.close();
>   }
> }
> {code}
> Result:
> {noformat}
> root@node:/# hadoop fs -put testfile /user/root
> root@node:/# hadoop fs -chmod 700 /user/root/testfile
> root@node:/# hadoop fs -ls /user/root
> Found 1 items
> -rw------- 1 root supergroup 0 2018-09-06 18:07 /user/root/testfile
> root@node:/# java -jar testDeleteOther.jar 
> log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration.deprecation).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_309539034_1, ugi=root (auth:SIMPLE)]]
> /user/root/testfile
> []
> root@node:/# hadoop fs -ls /user/root
> root@node:/# 
> {noformat}
> We should check the user's permissions before marking a file for deletion.
>  Could someone evaluate this? If no one objects, I would like to start working on it.
>  Thanks a lot for any comments.
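> For illustration, a minimal sketch of what such a check could look like, assuming FileSystem#access(Path, FsAction) can be used to verify write access to the parent directory (deleting a path generally requires write permission on its parent). Note that with HDFS the permission check is evaluated against the identity bound to the client connection, so a real fix would likely be more involved:
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.permission.FsAction;
>
> // Hypothetical helper, for illustration only: verify that the current caller
> // has write access to the parent directory before the path is recorded.
> class CheckedDeleteOnExit {
>
>   static boolean deleteOnExitChecked(FileSystem fs, Path f) throws IOException {
>     if (!fs.exists(f)) {
>       return false;
>     }
>     Path parent = f.getParent();
>     if (parent != null) {
>       // Throws AccessControlException if the caller may not write to the parent.
>       fs.access(parent, FsAction.WRITE);
>     }
>     return fs.deleteOnExit(f);
>   }
> }
> {code}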



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org