Posted to dev@spark.apache.org by Prabhu Joseph <pr...@gmail.com> on 2016/02/02 15:21:56 UTC

Spark saveAsHadoopFile stage fails with ExecutorLostFailure

Hi All,

   A Spark job stage that calls saveAsHadoopFile fails with ExecutorLostFailure
whenever the executors are configured with more cores. The stage is not memory
intensive, and each executor has 20GB of memory. For example:

- 6 executors, each with 6 cores: ExecutorLostFailure happens.

- 10 executors, each with 2 cores: saveAsHadoopFile runs fine.
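
For reference, a minimal sketch of the two executor configurations expressed
through SparkConf (the actual submit command isn't shown here; running on
YARN is assumed for spark.executor.instances):

    import org.apache.spark.SparkConf

    // Failing setup: fewer executors, more cores each (6 x 6 = 36 total cores)
    val failingConf = new SparkConf()
      .set("spark.executor.instances", "6")
      .set("spark.executor.cores", "6")   // 6 tasks run concurrently per executor
      .set("spark.executor.memory", "20g")

    // Working setup: more executors, fewer cores each (10 x 2 = 20 total cores)
    val workingConf = new SparkConf()
      .set("spark.executor.instances", "10")
      .set("spark.executor.cores", "2")   // 2 tasks run concurrently per executor
      .set("spark.executor.memory", "20g")

Note that spark.executor.cores sets how many tasks run concurrently inside
one executor, so in the 6-core case six tasks share the same 20GB heap at once.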

What could cause ExecutorLostFailure when the number of cores per executor
is high?
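
For context, the save stage boils down to a call of the following shape (a
simplified sketch with hypothetical types, path, and data; the real job code
is not shown in this post):

    import org.apache.hadoop.io.Text
    import org.apache.hadoop.mapred.TextOutputFormat
    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("SaveAsHadoopFileSketch"))

    // Hypothetical pair RDD standing in for the real job's data
    val pairs = sc.parallelize(Seq(("k1", "v1"), ("k2", "v2")))
      .map { case (k, v) => (new Text(k), new Text(v)) }

    // Each task writes its own partition through the old-API OutputFormat
    pairs.saveAsHadoopFile(
      "hdfs:///tmp/output",                   // hypothetical output path
      classOf[Text],                          // key class
      classOf[Text],                          // value class
      classOf[TextOutputFormat[Text, Text]])  // output format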



Error: ExecutorLostFailure (executor 3 lost)

16/02/02 04:22:40 WARN TaskSetManager: Lost task 1.3 in stage 15.0 (TID
1318, hdnprd-c01-r01-14):



Thanks,
Prabhu Joseph