Posted to user@hbase.apache.org by Yair Even-Zohar <ya...@audiencescience.com> on 2009/08/11 15:10:42 UTC

problem setting mapred.child.java.opts

I'm running a mapreduce using Hbase table as input with some distributed
cache file and all works well.

However, when I set

    c.set("mapred.child.java.opts", "-Xmx512m")

in the Java code, and run with the exact same input and the exact same
distributed cache, I'm getting the following:

 

on the master side:

 

09/08/11 08:19:05 INFO mapred.JobClient: Task Id :
attempt_200908110722_0016_m_000001_0, Status : FAILED

java.io.IOException: Task process exit with nonzero status of 134.

        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)

 

09/08/11 08:19:05 WARN mapred.JobClient: Error reading task
outputhttp://domU-12-31-39-02-95-D2.compute-1.internal:50060/tasklog?pla
intext=true&taskid=attempt_200908110722_0016_m_000001_0&filter=stdout

09/08/11 08:19:05 WARN mapred.JobClient: Error reading task
outputhttp://domU-12-31-39-02-95-D2.compute-1.internal:50060/tasklog?pla
intext=true&taskid=attempt_200908110722_0016_m_000001_0&filter=stderr

In the slave log I'm getting errors as if there is not enough memory. But I
run these on EC2 instances with 4 GB of memory, so that's insane.

 

Error occurred during initialization of VM

Could not reserve enough space for object heap

#

# An unexpected error has been detected by Java Runtime Environment:

#

#  SIGSEGV (0xb) at pc=0xf7ec40da, pid=4948, tid=4158831504

#

# Java VM: Java HotSpot(TM) Server VM (10.0-b23 mixed mode linux-x86)

# Problematic frame:

# C  [libc.so.6+0x6d0da]  cfree+0x7a

#

# An error report file with more information is saved as:

#
/mnt/hadoop/mapred/local/taskTracker/jobcache/job_200908110722_0016/atte
mpt_200908110722_0016_m_000001_0/work/hs_err_pid4948.log

#

# If you would like to submit a bug report, please visit:

#   http://java.sun.com/webapps/bugreport/crash.jsp

#
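For what it's worth, the "nonzero status of 134" the JobClient reported fits this crash: by the usual Unix convention an exit status above 128 means the child was killed by signal (status - 128), and 134 - 128 = 6 is SIGABRT on Linux, i.e. the child JVM aborted. A minimal sketch of that decoding (the class and method names here are mine, not Hadoop's):

```java
// Illustrative sketch, not Hadoop code: decode a task's exit status.
public class ExitStatus {

    // By shell convention, status = 128 + N means "killed by signal N".
    static int signalFromStatus(int status) {
        return status > 128 ? status - 128 : 0;
    }

    public static void main(String[] args) {
        // 134 - 128 = 6, which is SIGABRT on Linux: the JVM aborted.
        System.out.println(signalFromStatus(134));
    }
}
```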


Furthermore, I have checked the job_XXXXXXXXX_YYYY_conf.xml and the only
difference between the two runs (successful and unsuccessful) is the heap
size:

< <property><name>mapred.child.java.opts</name><value>-Xmx512m</value></property>
> <property><name>mapred.child.java.opts</name><value>-Xmx200m</value></property>


Any idea regarding this problem is welcome.

 

Thanks

-Yair

 


RE: problem setting mapred.child.java.opts

Posted by Yair Even-Zohar <ya...@audiencescience.com>.
Sorry to bug you guys again, but I found the problem: an old hadoop-site.xml
that was on the classpath limited "mapred.child.ulimit" to 500000.
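That would explain it: as I understand it, mapred.child.ulimit is a virtual-memory limit expressed in KB, and 500000 KB is less than the 524288 KB (512 MB) that -Xmx512m asks the JVM to reserve for the heap alone, so the child dies with "Could not reserve enough space for object heap". A back-of-the-envelope check (the xmxToKb helper is mine, purely illustrative):

```java
// Back-of-the-envelope check of the -Xmx512m vs. ulimit=500000 clash.
public class UlimitCheck {

    // Parse an -Xmx option such as "-Xmx512m" into kilobytes.
    static long xmxToKb(String opt) {
        String v = opt.substring("-Xmx".length());
        long n = Long.parseLong(v.substring(0, v.length() - 1));
        switch (Character.toLowerCase(v.charAt(v.length() - 1))) {
            case 'g': return n * 1024 * 1024;
            case 'm': return n * 1024;
            case 'k': return n;
            default:  throw new IllegalArgumentException(opt);
        }
    }

    public static void main(String[] args) {
        long ulimitKb = 500000L;              // stale mapred.child.ulimit, in KB
        long heapKb   = xmxToKb("-Xmx512m");  // 524288 KB
        // The requested heap alone already exceeds the whole-process limit,
        // so the JVM cannot even initialize. -Xmx200m (204800 KB) fits.
        System.out.println(heapKb > ulimitKb);
    }
}
```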

Thanks
-Yair

-----Original Message-----
From: Yair Even-Zohar [mailto:yaire@audiencescience.com] 
Sent: Tuesday, August 11, 2009 4:11 PM
To: common-user@hadoop.apache.org; hbase-user@hadoop.apache.org
Subject: problem setting mapred.child.java.opts 


 

