Posted to dev@ambari.apache.org by "Andrew Onischuk (JIRA)" <ji...@apache.org> on 2015/04/28 09:35:07 UTC

[jira] [Resolved] (AMBARI-10764) Incorrect configuration of spark-defaults.conf

     [ https://issues.apache.org/jira/browse/AMBARI-10764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Onischuk resolved AMBARI-10764.
--------------------------------------
    Resolution: Fixed

Committed to trunk

> Incorrect configuration of spark-defaults.conf
> ----------------------------------------------
>
>                 Key: AMBARI-10764
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10764
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.1.0
>
>
> Due to a configuration issue in spark-defaults.conf, all Spark applications
> fail to start containers.
>     
>     Stack trace: ExitCodeException exitCode=1: /grid/0/hadoop/yarn/local/usercache/hrt_qa/appcache/application_1429516150624_0124/container_1429516150624_0124_02_000003/launch_container.sh: line 14: $PWD:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:$PWD/__app__.jar:$PWD/*: bad substitution
>     
>             at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
>             at org.apache.hadoop.util.Shell.run(Shell.java:456)
>             at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
>             at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
>             at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>             at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>             at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>             at java.lang.Thread.run(Thread.java:745)
>     
>     Container exited with a non-zero exit code 1
>     
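> The "bad substitution" in that classpath is bash itself rejecting the literal, unexpanded ${hdp.version}: shell variable names cannot contain a dot. A one-line reproduction (illustrative, any bash):
>     
>     $ echo ${hdp.version}
>     bash: ${hdp.version}: bad substitution
>     
> Hadoop's Configuration expands ${...} placeholders from JVM system properties when the classpath settings are read, so setting -Dhdp.version on the driver and AM JVMs resolves those entries before the container launch script is generated, which is why the values below fix the failure.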
> Issues with spark-defaults.conf:
>   * It does not set values for spark.driver.extraJavaOptions or spark.yarn.am.extraJavaOptions (equivalent spark-submit flags are sketched after this list).
> **correct config values**
>     
>     spark.yarn.am.extraJavaOptions    -Dhdp.version=2.3.0.0-1644 
>     spark.driver.extraJavaOptions     -Dhdp.version=2.3.0.0-1644 
>     
>   * The spark.yarn.historyServer.address property is not set.
> **correct config value**
>     
>     spark.yarn.historyServer.address         os-amb-r6-us-1429252813-spark-2.novalocal:18080
>     
>   * The new Spark config does not set the spark.yarn.max_executor.failures and spark.yarn.services properties. Is this expected? zzhang, can you please confirm?
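> As a per-application workaround, the same settings can be passed on the spark-submit command line instead of spark-defaults.conf (a sketch: the HDP build number and the examples jar path are taken from this cluster and are illustrative):
>     
>     spark-submit --master yarn-cluster \
>       --conf "spark.driver.extraJavaOptions=-Dhdp.version=2.3.0.0-1644" \
>       --conf "spark.yarn.am.extraJavaOptions=-Dhdp.version=2.3.0.0-1644" \
>       --class org.apache.spark.examples.SparkPi \
>       /usr/hdp/current/spark-client/lib/spark-examples-*.jar 10
>     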
> Attaching the current and expected spark-defaults.conf.
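> A plausible shape of the Ambari-side fix (a sketch only: {{hdp_full_version}} and the history-server variables are assumed template parameters, not confirmed against the committed patch):
>     
>     spark.driver.extraJavaOptions     -Dhdp.version={{hdp_full_version}}
>     spark.yarn.am.extraJavaOptions    -Dhdp.version={{hdp_full_version}}
>     spark.yarn.historyServer.address  {{spark_history_server_host}}:{{spark_history_ui_port}}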



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)