Posted to user@spark.apache.org by Sophia <sl...@163.com> on 2014/06/13 09:46:18 UTC

Spark 1.0.0 on yarn cluster problem

In yarn-client mode, I submit a job from the client to YARN. My spark-env.sh
file contains:
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_EXECUTOR_INSTANCES=4
SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=2G
SPARK_YARN_APP_NAME="Spark 1.0.0"

The command line and the result:
$ export JAVA_HOME=/usr/java/jdk1.7.0_45/
$ export PATH=$JAVA_HOME/bin:$PATH
$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
yarn-client
./bin/spark-submit: line 44: /usr/lib/spark/bin/spark-class: Success
What can I do about this? YARN only accepts the job but does not allocate
memory to it. Why?




--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: Spark 1.0.0 on yarn cluster problem

Posted by Andrew Or <an...@databricks.com>.
Hi Sophia, did you ever resolve this?

A common cause of YARN not granting resources to a job is that the RM cannot
communicate with the workers.
This itself has many possible causes. Do you have a full stack trace from
the logs?
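If you don't have them handy, you can usually pull them from YARN itself. A
quick sketch, assuming log aggregation is enabled (the application ID is a
placeholder):

$ yarn node -list                      # check that the NodeManagers are up
$ yarn logs -applicationId <appId>     # aggregated logs for the application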

Andrew


2014-06-13 0:46 GMT-07:00 Sophia <sl...@163.com>:

> In yarn-client mode, I submit a job from the client to YARN. My
> spark-env.sh file contains:
> export HADOOP_HOME=/usr/lib/hadoop
> export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
> SPARK_EXECUTOR_INSTANCES=4
> SPARK_EXECUTOR_CORES=1
> SPARK_EXECUTOR_MEMORY=1G
> SPARK_DRIVER_MEMORY=2G
> SPARK_YARN_APP_NAME="Spark 1.0.0"
>
> The command line and the result:
> $ export JAVA_HOME=/usr/java/jdk1.7.0_45/
> $ export PATH=$JAVA_HOME/bin:$PATH
> $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
> yarn-client
> ./bin/spark-submit: line 44: /usr/lib/spark/bin/spark-class: Success
> What can I do about this? YARN only accepts the job but does not allocate
> memory to it. Why?
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>

Re: Spark 1.0.0 on yarn cluster problem

Posted by firemonk9 <dh...@gmail.com>.
Yes, the export worked.
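For anyone who finds this thread later: the fix was simply prefixing each
variable in spark-env.sh with export, e.g. with the values from my earlier
message:

export SPARK_EXECUTOR_CORES=1
export SPARK_EXECUTOR_MEMORY=3G
export SPARK_EXECUTOR_INSTANCES=5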

Thank you



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560p17180.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.



Re: Spark 1.0.0 on yarn cluster problem

Posted by Andrew Or <an...@databricks.com>.
Did you `export` the environment variables? Also, are you running in client
mode or cluster mode? If it still doesn't work, you can try setting these
through the spark-submit flags --num-executors, --executor-cores,
and --executor-memory.
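For example, a sketch using the values from your spark-env.sh (the
application jar and class here are placeholders):

$ ./bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn-client \
    --num-executors 5 \
    --executor-cores 1 \
    --executor-memory 3G \
    /path/to/spark-examples.jar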

2014-10-23 19:25 GMT-07:00 firemonk9 <dh...@gmail.com>:

> Hi,
>
>    I am facing the same problem. My spark-env.sh has the entries below, yet
> I see the YARN container with only 1G and YARN only spawns two workers.
>
> SPARK_EXECUTOR_CORES=1
> SPARK_EXECUTOR_MEMORY=3G
> SPARK_EXECUTOR_INSTANCES=5
>
> Please let me know if you are able to resolve this issue.
>
> Thank you
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560p17175.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>

Re: Spark 1.0.0 on yarn cluster problem

Posted by firemonk9 <dh...@gmail.com>.
Hi,

   I am facing the same problem. My spark-env.sh has the entries below, yet I
see the YARN container with only 1G and YARN only spawns two workers.

SPARK_EXECUTOR_CORES=1
SPARK_EXECUTOR_MEMORY=3G
SPARK_EXECUTOR_INSTANCES=5

Please let me know if you are able to resolve this issue.

Thank you



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-0-0-on-yarn-cluster-problem-tp7560p17175.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org