Posted to issues@spark.apache.org by "Andrew Ash (JIRA)" <ji...@apache.org> on 2014/05/12 22:58:15 UTC

[jira] [Created] (SPARK-1809) Mesos backend doesn't respect HADOOP_CONF_DIR

Andrew Ash created SPARK-1809:
---------------------------------

             Summary: Mesos backend doesn't respect HADOOP_CONF_DIR
                 Key: SPARK-1809
                 URL: https://issues.apache.org/jira/browse/SPARK-1809
             Project: Spark
          Issue Type: Bug
          Components: Mesos
    Affects Versions: 1.0.0
            Reporter: Andrew Ash


In order to use HDFS paths without the server component, standalone mode reads spark-env.sh and uses HADOOP_CONF_DIR to locate core-site.xml and read the fs.default.name parameter.

This lets you use HDFS paths like:
- hdfs:///tmp/myfile.txt
instead of
- hdfs://myserver.mydomain.com:8020/tmp/myfile.txt
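
For reference, a minimal setup that makes the short form work in standalone mode looks roughly like the following (the hostname and config path are illustrative, not taken from this cluster):

    # conf/spark-env.sh -- point Spark at the Hadoop client configuration
    export HADOOP_CONF_DIR=/etc/hadoop/conf

    <!-- $HADOOP_CONF_DIR/core-site.xml -- supplies the default filesystem -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://myserver.mydomain.com:8020</value>
      </property>
    </configuration>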

However, as of a recent 1.0.0 pre-release (hash 756c96), I had to specify HDFS paths with the full server component even though HADOOP_CONF_DIR is still set in spark-env.sh.  The HDFS, Spark, and Mesos nodes are all co-located, and HDFS paths without the server component work fine in standalone mode.
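
A rough reproduction sketch from spark-shell (the Mesos master URL and file path are illustrative; assumes /tmp/myfile.txt exists in HDFS):

    // Launched with MASTER=mesos://<master>:5050
    // Short form: resolves via fs.default.name in standalone mode,
    // but fails on the Mesos backend:
    val short = sc.textFile("hdfs:///tmp/myfile.txt")
    short.count()

    // Fully-qualified form: works on both backends:
    val full = sc.textFile("hdfs://myserver.mydomain.com:8020/tmp/myfile.txt")
    full.count()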


