Posted to user@spark.apache.org by Kostas Kougios <ko...@googlemail.com> on 2015/07/07 15:14:01 UTC

is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

I get a suspicious SIGTERM on the executors that doesn't seem to come from
the driver. The other thing that might send a SIGTERM is the
-XX:OnOutOfMemoryError=kill %p java arg that the executor starts with. Now,
my tasks don't seem to run out of memory, so how can I disable this param
to debug them?



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/is-it-possible-to-disable-XX-OnOutOfMemoryError-kill-p-for-the-executors-tp23680.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

Posted by Konstantinos Kougios <ko...@googlemail.com>.
Seems you're correct:

2015-07-07 17:21:27,245 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
Container [pid=38506,containerID=container_1436262805092_0022_01_000003]
is running beyond virtual memory limits. Current usage: 4.3 GB of 4.5 GB
physical memory used; 9.5 GB of 9.4 GB virtual memory used. Killing container.
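
The numbers even line up: assuming YARN's default 
yarn.nodemanager.vmem-pmem-ratio of 2.1 (worth checking on your cluster), 
a 4.5 GB physical cap gives roughly the 9.4 GB virtual cap in the log. 
A quick sketch:

// The container's physical cap is executor memory plus
// spark.yarn.executor.memoryOverhead; YARN then caps virtual memory at
// that size times yarn.nodemanager.vmem-pmem-ratio (default 2.1).
val pmemLimitGb = 4.5
val vmemLimitGb = pmemLimitGb * 2.1
println(f"vmem cap ≈ $vmemLimitGb%.2f GB") // 9.45, i.e. the "9.4 GB" in the log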



On 07/07/15 18:28, Marcelo Vanzin wrote:
> SIGTERM on YARN generally means the NM is killing your executor 
> because it's running over its requested memory limits. Check your NM 
> logs to make sure. And then take a look at the "memoryOverhead" 
> setting for driver and executors 
> (http://spark.apache.org/docs/latest/running-on-yarn.html).
>
> On Tue, Jul 7, 2015 at 7:43 AM, Kostas Kougios
> <kostas.kougios@googlemail.com> wrote:
>
>     I've recompiled Spark, deleting the -XX:OnOutOfMemoryError=kill
>     declaration, but I am still getting a SIGTERM!
>
> -- 
> Marcelo


Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

Posted by Marcelo Vanzin <va...@cloudera.com>.
SIGTERM on YARN generally means the NM is killing your executor because
it's running over its requested memory limits. Check your NM logs to make
sure. And then take a look at the "memoryOverhead" setting for driver and
executors (http://spark.apache.org/docs/latest/running-on-yarn.html).
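
For example, something like this (a minimal sketch; the keys below are the
Spark 1.x YARN settings documented at that page, values in MB, and 1024 is
just an illustrative amount to tune for your workload):

import org.apache.spark.SparkConf

// Ask YARN for extra off-heap headroom beyond the executor/driver heap,
// so the NodeManager's memory checks don't kill the container.
val conf = new SparkConf()
  .set("spark.executor.memory", "4g")
  .set("spark.yarn.executor.memoryOverhead", "1024") // MB
  .set("spark.yarn.driver.memoryOverhead", "1024")   // MB

The same keys can also be passed as --conf arguments to spark-submit.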

On Tue, Jul 7, 2015 at 7:43 AM, Kostas Kougios
<kostas.kougios@googlemail.com> wrote:

> I've recompiled Spark, deleting the -XX:OnOutOfMemoryError=kill declaration,
> but I am still getting a SIGTERM!


-- 
Marcelo

Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

Posted by Kostas Kougios <ko...@googlemail.com>.
I've recompiled Spark, deleting the -XX:OnOutOfMemoryError=kill declaration,
but I am still getting a SIGTERM!





Re: is it possible to disable -XX:OnOutOfMemoryError=kill %p for the executors?

Posted by Kostas Kougios <ko...@googlemail.com>.
It seems it is hardcoded in ExecutorRunnable.scala:

val commands = prefixEnv ++ Seq(
      YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java",
      "-server",
      // Kill if OOM is raised - leverage yarn's failure handling to cause
      // rescheduling.
      // Not killing the task leaves various aspects of the executor and
      // (to some extent) the jvm in an inconsistent state.
      // TODO: If the OOM is not recoverable by rescheduling it on different
      // node, then do 'something' to fail job ... akin to blacklisting
      // trackers in mapred ?
      "-XX:OnOutOfMemoryError='kill %p'") ++


