Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2014/11/06 00:59:33 UTC

[jira] [Resolved] (SPARK-3223) runAsSparkUser cannot change HDFS write permission properly in mesos cluster mode

     [ https://issues.apache.org/jira/browse/SPARK-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or resolved SPARK-3223.
------------------------------
          Resolution: Fixed
       Fix Version/s:     (was: 1.1.0)
                      1.1.1
    Target Version/s: 1.1.1, 1.2.0  (was: 1.1.0, 1.2.0)

> runAsSparkUser cannot change HDFS write permission properly in mesos cluster mode
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-3223
>                 URL: https://issues.apache.org/jira/browse/SPARK-3223
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output, Mesos
>    Affects Versions: 1.0.2
>            Reporter: Jongyoul Lee
>            Assignee: Jongyoul Lee
>            Priority: Critical
>             Fix For: 1.1.1, 1.2.0
>
>
> While running Mesos with the --no-switch_user option, the HDFS account name differs between the driver and the executors, which causes a permission error in the final stage. The executor's user id is the Mesos user id, while the driver's user id is that of whoever ran spark-submit. As a result, moving output from _temporary/path/to/output/part-xxxx to /output/path/part-xxxx fails with a permission error. The solution is simply to set HADOOP_USER_NAME from SPARK_USER when MesosExecutorBackend calls runAsSparkUser, since HADOOP_USER_NAME is what FileSystem uses to determine the user.
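
For reference, a minimal sketch of the mechanism described above, assuming Hadoop's UserGroupInformation API. The object and method below are illustrative only, not the actual Spark code (the real change lives in MesosExecutorBackend / SparkHadoopUtil.runAsSparkUser):

    import java.security.PrivilegedExceptionAction
    import org.apache.hadoop.security.UserGroupInformation

    // Illustrative sketch: run a block of work as the user named in SPARK_USER,
    // so HDFS attributes writes to the same account as the driver rather than
    // the Mesos slave's OS user.
    object RunAsSparkUserSketch {
      def runAsSparkUser(func: () => Unit): Unit = {
        // SPARK_USER is set on the driver side to whoever ran spark-submit.
        val user = sys.env.getOrElse("SPARK_USER", System.getProperty("user.name"))
        // FileSystem calls made inside doAs() are attributed to this user,
        // giving the same effect as exporting HADOOP_USER_NAME for the JVM.
        val ugi = UserGroupInformation.createRemoteUser(user)
        ugi.doAs(new PrivilegedExceptionAction[Unit] {
          override def run(): Unit = func()
        })
      }
    }

In the fix as described, MesosExecutorBackend wraps its work in such a call, so the executor writes to HDFS as the submitting user even when Mesos is started with --no-switch_user.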



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
