Posted to user@spark.apache.org by Jyun-Fan Tsai <jy...@gmail.com> on 2014/01/09 10:47:31 UTC

How to increase spark.worker.timeout?

Hi
We use a standalone deploy master.  When running jobs we often see some
workers marked as Dead.  To resolve the problem we set "spark.worker.timeout"
to 600 seconds in spark-env.sh:

SPARK_JAVA_OPTS+=" -Dspark.worker.timeout=600"
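For context, this is how the whole spark-env.sh fragment looks on our end (a sketch; the export line is only there in case the variable is consumed by a child process rather than the sourcing script itself):

```shell
# spark-env.sh (sourced by the Spark launch scripts)
# Append the worker timeout (in seconds) to any existing JVM options.
SPARK_JAVA_OPTS+=" -Dspark.worker.timeout=600"
export SPARK_JAVA_OPTS
```

We changed this file on the worker machines; I am not sure whether the master also reads this property and therefore needs the same change plus a restart.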

However, we still see this error:

"14/01/09 09:28:27 WARN master.Master: Removing worker-ABC because we got
no heartbeat in 60 seconds"

The error message shows that the timeout is still 60 seconds, the default
value.  Any advice on how to set the timeout correctly?

The Spark version we use is 0.8.1.

-- 
Regards,
Jyun-Fan Tsai