Posted to issues@spark.apache.org by "Sohaib Iftikhar (JIRA)" <ji...@apache.org> on 2016/01/06 13:03:39 UTC
[jira] [Created] (SPARK-12676) There is no way to stop a spark-streaming job from a worker in case of errors.
Sohaib Iftikhar created SPARK-12676:
---------------------------------------
Summary: There is no way to stop a spark-streaming job from a worker in case of errors.
Key: SPARK-12676
URL: https://issues.apache.org/jira/browse/SPARK-12676
Project: Spark
Issue Type: Improvement
Components: Streaming
Affects Versions: 1.6.0
Environment: All operating systems
Reporter: Sohaib Iftikhar
Priority: Critical
Consider an application that reads data from an external source and writes it to HDFS. If for some reason the NameNode crashes, the job keeps reading data from the source but throws errors when writing to HDFS, which eventually leads to data loss. It would be desirable to supply the workers with some switch that kills the job; this could trigger an alert to the system administrator, and the job could be restarted once the NameNode problem is fixed.
See related question here: http://stackoverflow.com/questions/34195453/sparkstreaming-shut-down-job-in-case-of-error
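As a workaround sketch (not part of the original report; names and the socket source are illustrative assumptions), the driver can catch the HDFS write failure inside foreachRDD and stop the StreamingContext itself, since output actions like saveAsTextFile are issued from the driver:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StopOnWriteFailure {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StopOnWriteFailure")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Hypothetical input source; replace with the real receiver.
    val lines = ssc.socketTextStream("localhost", 9999)

    lines.foreachRDD { rdd =>
      try {
        // saveAsTextFile is triggered from the driver, so a NameNode
        // outage surfaces here as an exception rather than being lost
        // on a worker.
        rdd.saveAsTextFile(s"hdfs:///data/out-${System.currentTimeMillis}")
      } catch {
        case e: Exception =>
          // Alert the operator, then shut the job down instead of
          // silently consuming (and losing) further input.
          System.err.println(s"HDFS write failed, stopping job: ${e.getMessage}")
          ssc.stop(stopSparkContext = true, stopGracefully = false)
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that calling ssc.stop from within a batch can block in some Spark versions; invoking it from a separate thread is safer. This also does not cover errors raised purely on executors, which is exactly the gap this issue asks to close.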
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org