Posted to common-user@hadoop.apache.org by Ferdy Galema <f....@gmail.com> on 2008/02/18 15:42:25 UTC

JVM core dumps unavailable because of temporary folders

Every once in a while some of our tasks fail (not the TaskTracker, just the
tasks) because the JVM (jre1.6.0_04) crashes with exit code 134. The task log
reports that it wrote a crash dump:

# An unexpected error has been detected by Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00002aaaaaeb5290, pid=25119, tid=1081145664
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0-b19 mixed mode
linux-amd64)
# Problematic frame:
# V  [libjvm.so+0x2f5290]
#
# An error report file with more information is saved as:
#
/kalooga/filesystem/mapreduce/local/taskTracker/jobcache/job_200802151637_0006/task_200802151637_0006_m_000005_0/hs_err_pid25119.log
#
# If you would like to submit a bug report, please visit:
#  http://java.sun.com/webapps/bugreport/crash.jsp
#


The problem is that the dump file does not exist at the specified location. My
bet is that Hadoop starts a new task immediately after the failed one, which
causes the old jobcache directory to be deleted. Is it possible to keep this
cache instead?
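
The best workaround I can think of so far (completely untested) is to have
HotSpot write the error report somewhere outside the jobcache directory, by
adding -XX:ErrorFile to the child JVM options. A rough sketch of what I mean,
where the path and heap size are just examples, not our real settings:

  import org.apache.hadoop.mapred.JobConf;

  public class CrashDumpFriendlyJob {
    public static void main(String[] args) {
      JobConf conf = new JobConf(CrashDumpFriendlyJob.class);
      // Keep the usual child heap setting, but ask HotSpot to write
      // hs_err_pid<pid>.log into a directory that the TaskTracker never
      // cleans up (the path below is only an example).
      conf.set("mapred.child.java.opts",
               "-Xmx200m -XX:ErrorFile=/var/log/hadoop/hs_err_pid%p.log");
      // ... set mapper/reducer, input/output paths, then submit the job
    }
  }

That would at least make it irrelevant that the task directory disappears, but
I would still prefer to simply keep the original directory around.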

Re: JVM core dumps unavailable because of temporary folders

Posted by Ferdy Galema <f....@gmail.com>.
Is this the wrong mailing list? Is it too trivial, or perhaps too difficult to
fix? Do I need to provide more details?

Any help would be nice :)
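
For the record, the closest thing I have found in the API so far is
JobConf.setKeepFailedTaskFiles(), which (if I read it correctly) should stop
the TaskTracker from cleaning up the local directories of failed tasks, so the
hs_err file would survive. A minimal, untested sketch:

  import org.apache.hadoop.mapred.JobConf;

  public class KeepFailedTaskFilesJob {
    public static void main(String[] args) {
      JobConf conf = new JobConf(KeepFailedTaskFilesJob.class);
      // Ask the framework not to delete the working directories of failed
      // tasks, so crash dumps left in the jobcache can be inspected later.
      conf.setKeepFailedTaskFiles(true);
      // ... configure mapper/reducer and paths, then run the job as usual
    }
  }

Can anyone confirm whether this is the intended way to keep the jobcache
around, or whether there is a better approach?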


Re: JVM core dumps unavailable because of temporary folders

Posted by Ferdy Galema <f....@gmail.com>.
Bump
