Posted to user@spark.apache.org by Piotr Kołaczkowski <pk...@datastax.com> on 2014/07/02 15:30:28 UTC

Re: How to terminate job from the task code?

SparkContext is not serializable and can't be just "sent across" ;)
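For illustration, a rough sketch of what goes wrong (the RDD and the error check
here are made up): as soon as the closure references the context, the job fails
at submission time with "Task not serializable".

// Hypothetical example: closing over the SparkContext from inside a task
val sc = new SparkContext(new SparkConf().setAppName("example"))
val rdd = sc.parallelize(1 to 100)

rdd.foreach { x =>
  if (x < 0) {
    // fails before the job even runs: org.apache.spark.SparkException:
    // Task not serializable (SparkContext is not serializable)
    sc.cancelAllJobs()
  }
}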


2014-06-21 14:14 GMT+02:00 Mayur Rustagi <ma...@gmail.com>:

> You can terminate a job group from the SparkContext; you'll have to send the
> SparkContext across to your task.
>  On 21 Jun 2014 01:09, "Piotr Kołaczkowski" <pk...@datastax.com> wrote:
>
>> If the task detects an unrecoverable error, i.e. an error that we can't
>> expect to fix by retrying or by moving the task to another node, how do we
>> stop the job / prevent Spark from retrying it?
>>
>> def process(taskContext: TaskContext, data: Iterator[T]) {
>>   ...
>>
>>   if (unrecoverableError) {
>>     ??? // terminate the job immediately
>>   }
>>   ...
>> }
>>
>> Somewhere else:
>> rdd.sparkContext.runJob(rdd, something.process _)
>>
>>
>> Thanks,
>> Piotr
>>
>>
>> --
>> Piotr Kolaczkowski, Lead Software Engineer
>> pkolaczk@datastax.com
>>
>> http://www.datastax.com/
>> 777 Mariners Island Blvd., Suite 510
>> San Mateo, CA 94404
>>
>
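What does work is driving the cancellation from the driver side: tag the job
with a job group and cancel that group once an unrecoverable error has been
reported back to the driver. A rough, untested sketch (the group id, the
error-reporting check and the watcher thread are made up):

// Driver side: tag subsequent jobs with a group id
sc.setJobGroup("my-job-group", "cancellable job", interruptOnCancel = true)

// Hypothetical watcher that cancels the group once a task has signalled an
// unrecoverable error (e.g. via an accumulator or an external flag)
val watcher = new Thread {
  override def run(): Unit = {
    while (!unrecoverableErrorReported()) Thread.sleep(100)  // made-up check
    sc.cancelJobGroup("my-job-group")
  }
}
watcher.start()

rdd.sparkContext.runJob(rdd, something.process _)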


-- 
Piotr Kolaczkowski, Lead Software Engineer
pkolaczk@datastax.com

http://www.datastax.com/
3975 Freedom Circle
Santa Clara, CA 95054, USA