Posted to user@spark.apache.org by "Alex Turner (TMS)" <al...@toyota.com> on 2015/03/18 00:46:52 UTC
Memory Settings for local execution context
The configuration page (http://spark.apache.org/docs/1.2.1/configuration.html) doesn't seem to apply when running in a local context. I have a shell script that starts my job:
export SPARK_MASTER_OPTS="-Dsun.io.serialization.extendedDebugInfo=true"
export SPARK_WORKER_OPTS="-Dsun.io.serialization.extendedDebugInfo=true"
/Users/spark/spark/bin/spark-submit \
--class jobs.MyJob \
--master local[1] \
--conf spark.executor.memory=8g \
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
--conf spark.driver.memory=10g \
--conf spark.executor.extraJavaOptions="-Dsun.io.serialization.extendedDebugInfo=true" \
target/scala-2.10/my-job.jar
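A possible workaround, assuming Spark's documented client-mode behavior: with --master local[1] the executor runs inside the driver JVM, so spark.executor.memory has no separate process to size, and spark.driver.memory passed via --conf is read only after the driver JVM has already started. Passing --driver-memory on the command line (or setting spark.driver.memory in spark-defaults.conf) sizes the heap before launch. A sketch of the adjusted script:

```shell
# Sketch, not the original poster's script: in local mode the executor
# shares the driver JVM, so size the driver heap up front with
# --driver-memory instead of --conf spark.driver.memory.
/Users/spark/spark/bin/spark-submit \
  --class jobs.MyJob \
  --master local[1] \
  --driver-memory 10g \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.executor.extraJavaOptions="-Dsun.io.serialization.extendedDebugInfo=true" \
  target/scala-2.10/my-job.jar
```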
Even with spark-defaults.conf and spark-env.sh mostly stripped down, the running job shows only 265 MB of memory for the executor! As far as I can tell, I set nothing on the SparkConf object inside the jar.
How can I get my executor memory up to be nice and big?
Thanks,
Alex