Posted to user@spark.apache.org by jamborta <ja...@gmail.com> on 2014/09/26 03:02:21 UTC
Job cancelled because SparkContext was shut down
Hi all,
I am getting this strange error about halfway through the job (running
Spark 1.1 in YARN client mode):
14/09/26 00:54:06 INFO ConnectionManager: key already cancelled ?
sun.nio.ch.SelectionKeyImpl@4d0155fb
java.nio.channels.CancelledKeyException
at
org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)
at
org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)
14/09/26 00:54:06 INFO YarnClientSchedulerBackend: Executor 1 disconnected,
so removing it
then a few minutes later the whole process dies:
14/09/26 01:00:12 ERROR YarnClientSchedulerBackend: Yarn application already
ended: FINISHED
14/09/26 01:00:13 INFO SparkUI: Stopped Spark web UI at
http://backend-dev:4040
14/09/26 01:00:13 INFO YarnClientSchedulerBackend: Shutting down all
executors
14/09/26 01:00:13 INFO YarnClientSchedulerBackend: Asking each executor to
shut down
[E 140926 01:00:13 base:56] Request failed
14/09/26 01:00:13 INFO YarnClientSchedulerBackend: Stopped
[E 140926 01:00:13 base:57] {'error_msg': "<type 'exceptions.Exception'>,
org.apache.spark.SparkException: Job cancelled because SparkContext was shut
down, <traceback object at 0x4483cb0>"}
any idea what's going on here?
thanks,
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Job-cancelled-because-SparkContext-was-shut-down-tp15189.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org
Re: Job cancelled because SparkContext was shut down
Posted by jamborta <ja...@gmail.com>.
Just wanted to answer my own question in case someone else runs into the
same problem.
It is related to the problem discussed here:
http://apache-spark-developers-list.1001551.n3.nabble.com/Lost-executor-on-YARN-ALS-iterations-td7916.html
and here:
https://issues.apache.org/jira/browse/SPARK-2121
It seems YARN kills some of the executors because they request more memory
than expected.
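For anyone hitting this, a common mitigation is to give each executor container extra headroom beyond the JVM heap via spark.yarn.executor.memoryOverhead (a real config in Spark 1.1, in megabytes, default 384). A minimal sketch of a spark-submit invocation; the memory values and the application file name are placeholders, adjust for your cluster:

```shell
# Raise the per-container overhead so YARN does not kill executors
# whose off-heap usage pushes them past the container limit.
spark-submit \
  --master yarn-client \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  my_job.py
```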
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Job-cancelled-because-SparkContext-was-shut-down-tp15189p15216.html