Posted to user@spark.apache.org by shyla deshpande <de...@gmail.com> on 2018/01/30 16:52:37 UTC

spark job error

I am running Zeppelin on EMR with the default settings, and I am getting the
following error. Restarting the Zeppelin application fixes the problem.

Which default settings do I need to override to fix this error?

org.apache.spark.SparkException: Job aborted due to stage failure: Task 71
in stage 231.0 failed 4 times, most recent failure: Lost task 71.3 in stage
231.0 Reason: Container killed by YARN for exceeding memory limits. 1.4 GB
of 1.4 GB physical memory used. Consider boosting
spark.yarn.executor.memoryOverhead.

Thanks

Re: spark job error

Posted by Jacek Laskowski <ja...@japila.pl>.
Hi,

Start with spark.executor.memory set to 2g. You may also
give spark.yarn.executor.memoryOverhead a try.
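
For example, on EMR these can go in the Zeppelin Spark interpreter settings
or in spark-defaults.conf (typically /etc/spark/conf/spark-defaults.conf).
A minimal sketch; the 2g and 512 values are starting points, not tuned numbers:

    spark.executor.memory                2g
    spark.yarn.executor.memoryOverhead   512

spark.yarn.executor.memoryOverhead is given in megabytes. YARN sizes the
container at roughly executor heap plus overhead, which is why the error
reports a ~1.4 GB limit with the 1g default heap plus the 384 MB minimum
overhead.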

See https://spark.apache.org/docs/latest/configuration.html and
https://spark.apache.org/docs/latest/running-on-yarn.html for more in-depth
information.
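
The same settings can also be passed per application, e.g. with spark-submit
(values are just an illustration):

    spark-submit \
      --conf spark.executor.memory=2g \
      --conf spark.yarn.executor.memoryOverhead=512 \
      <app jar> [args]

In Zeppelin, the Spark interpreter needs to be restarted after changing these
settings for them to take effect.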

Regards,
Jacek Laskowski
----
https://about.me/JacekLaskowski
Mastering Spark SQL https://bit.ly/mastering-spark-sql
Spark Structured Streaming https://bit.ly/spark-structured-streaming
Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
Follow me at https://twitter.com/jaceklaskowski
