Posted to user@spark.apache.org by Behroz Sikander <be...@gmail.com> on 2017/03/23 10:46:28 UTC

[Worker Crashing] OutOfMemoryError: GC overhead limit exceeded

Hello,
Spark version: 1.6.2
Hadoop: 2.6.0

Cluster:
All VMs are deployed on AWS.
1 Master (t2.large)
1 Secondary Master (t2.large)
5 Workers (m4.xlarge)
Zookeeper (t2.large)

Recently, 2 of our workers went down with an OutOfMemoryError.

> java.lang.OutOfMemoryError: GC overhead limit exceeded (max heap: 1024 MB)


Both of these worker processes were in a hung state. We restarted them to
bring them back to a normal state.

Here is the complete exception:
https://gist.github.com/bsikander/84f1a0f3cc831c7a120225a71e435d91
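For context, the 1024 MB max heap in the error above matches Spark's default
memory for the standalone daemons (SPARK_DAEMON_MEMORY, which defaults to 1g).
If it is the worker JVM itself that ran out of memory, raising that in
spark-env.sh would look roughly like this (the 2g value is only an
illustration, not our current setting):

    # spark-env.sh on each worker
    # Heap for the standalone Master/Worker daemon processes themselves
    # (not the executors); default is 1g, which matches the 1024 MB above.
    export SPARK_DAEMON_MEMORY=2g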

Master's spark-defaults.conf file:
https://gist.github.com/bsikander/4027136f6a6c91eabad576495c4d797d

Master's spark-env.sh:
https://gist.github.com/bsikander/42f76d7a8e4079098d8a2df3cdee8ee0

Slave's spark-defaults.conf file:
https://gist.github.com/bsikander/54264349b49e6227c6912eb14d344b8c

So, what could be the reason for our workers crashing with an OutOfMemoryError?
How can we avoid this in the future?
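
To help with diagnosis next time, we could also turn on GC logging for the
worker daemons. Assuming SPARK_DAEMON_JAVA_OPTS is the right place for extra
daemon JVM flags, a rough sketch (the log path is a placeholder) would be:

    # spark-env.sh on each worker
    # JVM options applied to the standalone daemons; the GC log should show
    # what was filling the heap before the "GC overhead limit exceeded" error.
    export SPARK_DAEMON_JAVA_OPTS="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/spark-worker-gc.log"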

Regards,
Behroz