Posted to user@spark.apache.org by prateek arora <pr...@gmail.com> on 2016/06/01 17:55:32 UTC

How to enable core dump in spark

Hi,

I am using Cloudera to set up Spark 1.6.0 on Ubuntu 14.04.

I set the core dump limit to unlimited on all nodes by editing the
/etc/security/limits.conf file and adding a " * soft core unlimited " line.
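
For reference, the relevant entries look roughly like this (adding a matching
hard limit line is my own assumption, since the soft limit cannot be raised
above the hard one):

  # /etc/security/limits.conf
  *    soft    core    unlimited
  *    hard    core    unlimited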

I rechecked using:  $ ulimit -all

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 241204
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 241204
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

But when I run my Spark application with some third-party native libraries,
it sometimes crashes with the error "Failed to write core dump. Core dumps
have been disabled. To enable core dumping, try "ulimit -c unlimited" before
starting Java again".

Below is the log:

# A fatal error has been detected by the Java Runtime Environment: 
# 
#  SIGSEGV (0xb) at pc=0x00007fd44b491fb9, pid=20458, tid=140549318547200 
# 
# JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build
1.7.0_67-b01) 
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode
linux-amd64 compressed oops) 
# Problematic frame: 
# V  [libjvm.so+0x650fb9]  jni_SetByteArrayRegion+0xa9 
# 
# Failed to write core dump. Core dumps have been disabled. To enable core
dumping, try "ulimit -c unlimited" before starting Java again 
# 
# An error report file with more information is saved as: 
#
/yarn/nm/usercache/master/appcache/application_1462930975871_0004/container_1462930975871_0004_01_000066/hs_err_pid20458.log 
# 
# If you would like to submit a bug report, please visit: 
#   http://bugreport.sun.com/bugreport/crash.jsp
# 


So how can I enable core dumps and have them saved in a known location?
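
(What I am planning to try, as a rough sketch; the directory path below is
just an example. On Ubuntu the kernel core pattern is often set to pipe cores
to apport, so the current value is worth checking first.)

  # see where the kernel currently sends core files
  $ cat /proc/sys/kernel/core_pattern

  # point core files at a world-writable directory instead
  $ sudo mkdir -p /var/coredumps && sudo chmod 1777 /var/coredumps
  $ sudo sysctl -w kernel.core_pattern=/var/coredumps/core.%e.%p.%t

  # make the setting survive a reboot
  $ echo 'kernel.core_pattern=/var/coredumps/core.%e.%p.%t' | sudo tee -a /etc/sysctl.conf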

Regards
Prateek





Re: How to enable core dump in spark

Posted by Jacek Laskowski <ja...@japila.pl>.
What about the user the NodeManagers run as?
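
The containers inherit their resource limits from the NodeManager process
that launches them, so it is worth checking what that daemon actually got.
A rough check (the pgrep pattern is only an illustration of how the daemon
usually shows up in ps):

  # find the NodeManager PID and inspect its core file size limit
  $ NM_PID=$(pgrep -f org.apache.hadoop.yarn.server.nodemanager.NodeManager | head -n 1)
  $ grep "core file" /proc/$NM_PID/limits

If that still shows 0, the limits.conf change never reached the daemon:
limits.conf is applied by PAM at login, so a service started at boot or by a
management agent may not pick it up until it is restarted from an environment
where the higher limit is in effect.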

Pozdrawiam,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Thu, Jun 16, 2016 at 10:51 PM, prateek arora
<pr...@gmail.com> wrote:
> [...]


Re: How to enable core dump in spark

Posted by prateek arora <pr...@gmail.com>.
Hi,

I am using Spark with YARN. How can I make sure that the ulimit settings
are applied to the Spark process?
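
One thing I am thinking of trying, as a sketch (this assumes the daemons are
started with the stock yarn-daemon.sh scripts; on a Cloudera Manager managed
cluster the limit would probably have to be raised through CM instead): raise
the limit in the environment that launches the NodeManager, since the
containers inherit their limits from it.

  # sketch: add this to yarn-env.sh on every node
  ulimit -c unlimited

  # then restart the NodeManagers so the new limit is picked up
  $ yarn-daemon.sh stop nodemanager
  $ yarn-daemon.sh start nodemanager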

I set the core dump limit to unlimited on all nodes by editing the
/etc/security/limits.conf file and adding a " * soft core unlimited " line.

I rechecked using:  $ ulimit -all

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 241204
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 241204
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Regards
Prateek


On Thu, Jun 16, 2016 at 4:46 AM, Jacek Laskowski <ja...@japila.pl> wrote:

> Hi,
>
> Can you make sure that the ulimit settings are applied to the Spark
> process? Is this Spark on YARN or Standalone?
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
>
> On Wed, Jun 1, 2016 at 7:55 PM, prateek arora
> <pr...@gmail.com> wrote:
> > [...]

Re: How to enable core dump in spark

Posted by Jacek Laskowski <ja...@japila.pl>.
Hi,

Can you make sure that the ulimit settings are applied to the Spark
process? Is this Spark on YARN or Standalone?
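
One rough way to check (the executor backend class name below is what I would
expect to find in ps on a worker node while a job is running):

  # on a node that is currently running an executor, look at the limits
  # the executor JVM actually inherited
  $ EXEC_PID=$(pgrep -f org.apache.spark.executor.CoarseGrainedExecutorBackend | head -n 1)
  $ grep "core file" /proc/$EXEC_PID/limits

If it shows 0 there even though your login shell shows unlimited, the setting
from limits.conf is not reaching the process that launches the executors (the
YARN NodeManager or the standalone Worker).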

Pozdrawiam,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Wed, Jun 1, 2016 at 7:55 PM, prateek arora
<pr...@gmail.com> wrote:
> [...]



Re: How to enable core dump in spark

Posted by prateek arora <pr...@gmail.com>.
Please help me solve my problem.

Regards
Prateek


