Posted to users@zeppelin.apache.org by shyla deshpande <de...@gmail.com> on 2018/01/30 16:53:00 UTC

Spark job error

I am running Zeppelin on EMR with the default settings, and I am getting the
following error. Restarting the Zeppelin application fixes the problem.

Which default settings do I need to override to fix this error?

org.apache.spark.SparkException: Job aborted due to stage failure: Task 71
in stage 231.0 failed 4 times, most recent failure: Lost task 71.3 in stage
231.0 Reason: Container killed by YARN for exceeding memory limits. 1.4 GB
of 1.4 GB physical memory used. Consider boosting
spark.yarn.executor.memoryOverhead.
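
For reference, here is one way the property the error suggests could be
raised: an EMR configuration supplied at cluster creation under the
spark-defaults classification. This is a minimal sketch; the values are
illustrative assumptions, not tuned for this workload:

    [
      {
        "Classification": "spark-defaults",
        "Properties": {
          "spark.yarn.executor.memoryOverhead": "1024",
          "spark.executor.memory": "4g"
        }
      }
    ]

The same spark.yarn.executor.memoryOverhead property can also be set on the
spark interpreter from Zeppelin's Interpreter page. The value is in MiB, and
its default is max(384 MiB, 10% of executor memory), so with the 1 GiB default
executor memory the YARN container limit comes to roughly 1.4 GB, matching the
limit reported above.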

Thanks