Posted to issues@spark.apache.org by "hzw (JIRA)" <ji...@apache.org> on 2014/08/19 09:10:18 UTC
[jira] [Created] (SPARK-3120) Local Dirs is not useful in yarn-client mode
hzw created SPARK-3120:
--------------------------
Summary: Local Dirs is not useful in yarn-client mode
Key: SPARK-3120
URL: https://issues.apache.org/jira/browse/SPARK-3120
Project: Spark
Issue Type: Bug
Components: Spark Core, YARN
Affects Versions: 1.0.2
Environment: Spark 1.0.2
Yarn 2.3.0
Reporter: hzw
I was using Spark 1.0.2 and Hadoop 2.3.0 to run a Spark application on YARN.
I expected to set spark.local.dir so that the shuffle files would be spread across many disks, so I exported LOCAL_DIRS in spark-env.sh.
But it failed to create the local dirs in my specified path.
The executors just used the Hadoop default path, "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/".
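As a sketch, the setup attempted above would look roughly like this in spark-env.sh (the disk paths here are illustrative, not the actual ones used):

```shell
# spark-env.sh -- attempt to spread executor shuffle files across disks.
# NOTE: as reported in this issue, in yarn-client mode this value is
# overwritten by YARN when the executor container is launched, so the
# executors still end up under the NodeManager's default local dir.
export LOCAL_DIRS=/data1/spark/local,/data2/spark/local
```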
To reproduce this:
1. Do not set "yarn.nodemanager.local-dirs" in yarn-site.xml, which would otherwise influence the result.
2. Run a job, then check the executor log for the INFO message "DiskBlockManager: Created local directory at ......"
In addition, I tried adding the exported LOCAL_DIRS to yarn-env.sh. The LOCAL_DIRS value does reach the ExecutorLauncher, but it is still overwritten by YARN when the executor container is launched.
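For comparison, the setting YARN itself consults when populating each container's LOCAL_DIRS environment variable is yarn.nodemanager.local-dirs. A hedged sketch of what that would look like in yarn-site.xml (paths illustrative):

```xml
<!-- yarn-site.xml: the directories the NodeManager hands to each
     container via the LOCAL_DIRS environment variable. When unset,
     YARN falls back to the /tmp/hadoop-.../nm-local-dir default
     observed in the executor logs above. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>
```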
--
This message was sent by Atlassian JIRA
(v6.2#6252)