Posted to dev@ambari.apache.org by Andrew Onischuk <ao...@hortonworks.com> on 2015/04/27 16:53:30 UTC
Review Request 33582: Incorrect configuration of spark-defaults.conf
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33582/
-----------------------------------------------------------
Review request for Ambari and Vitalyi Brodetskyi.
Bugs: AMBARI-10764
https://issues.apache.org/jira/browse/AMBARI-10764
Repository: ambari
Description
-------
Due to a configuration issue in spark-defaults.conf, all Spark applications
fail to start containers.
Stack trace: ExitCodeException exitCode=1: /grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1429516150624_0124/container_1429516150624_0124_02_000003/launch_container.sh: line 14: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__app__.jar:$PWD/*: bad substitution
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
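The "bad substitution" failure is Bash rejecting the literal `${hdp.version}` token in the container classpath: a dot is not legal in a shell parameter name, so the launch script aborts before the JVM even starts. The error can be reproduced in isolation, without Hadoop:

```shell
# Bash cannot expand '${hdp.version}' -- '.' is not a valid character in a
# shell parameter name -- so it fails with "bad substitution", the same
# error seen in launch_container.sh above.
bash -c 'echo "/usr/hdp/${hdp.version}/hadoop/lib"' 2>&1 || true
```

Passing -Dhdp.version to the driver and AM JVMs (the extraJavaOptions values listed below) lets Hadoop resolve the version token before the classpath ever reaches the shell.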
Issues with spark-defaults.conf
* Does not set values for spark.driver.extraJavaOptions and spark.yarn.am.extraJavaOptions
**correct config value**
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.0.0-1644
spark.driver.extraJavaOptions -Dhdp.version=2.3.0.0-1644
* spark.yarn.historyServer.address property is not set
**correct config value**
spark.yarn.historyServer.address os-amb-r6-us-1429252813-spark-2.novalocal:18080
* The new Spark config does not set the spark.yarn.max_executor.failures and spark.yarn.services properties. Is this expected? zzhang, can you please confirm?
Attaching the current and expected spark-defaults.conf.
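Taken together, the corrections above amount to a spark-defaults.conf fragment along these lines (the hostname, port, and the -Dhdp.version build number are the values from this test cluster and will differ on other installs):

```
spark.yarn.am.extraJavaOptions     -Dhdp.version=2.3.0.0-1644
spark.driver.extraJavaOptions      -Dhdp.version=2.3.0.0-1644
spark.yarn.historyServer.address   os-amb-r6-us-1429252813-spark-2.novalocal:18080
```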
Diffs
-----
ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/configuration/spark-defaults.xml 2aa2b4e
Diff: https://reviews.apache.org/r/33582/diff/
Testing
-------
mvn clean test
Thanks,
Andrew Onischuk
Re: Review Request 33582: Incorrect configuration of spark-defaults.conf
Posted by Vitalyi Brodetskyi <vb...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33582/#review81682
-----------------------------------------------------------
Ship it!
Ship It!
- Vitalyi Brodetskyi
On April 27, 2015, 2:53 p.m., Andrew Onischuk wrote: