Posted to dev@hama.apache.org by "Edward J. Yoon" <ed...@apache.org> on 2008/12/12 09:39:29 UTC

Re: java.lang.OutOfMemoryError: Java heap space

Yes, the RowResult seems too large. But I don't think "increasing the
child heap" is a good solution.
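
For reference, this is roughly what that suggestion amounts to (the
property name is from Thibaut's mail quoted below; the -Xmx value is
only an example, not a recommendation):

  import org.apache.hadoop.mapred.JobConf;

  public class ChildHeapSketch {
    public static void main(String[] args) {
      JobConf job = new JobConf();
      // Same effect as editing mapred.child.java.opts in hadoop-site.xml.
      job.set("mapred.child.java.opts", "-Xmx512m");
    }
  }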

Let's look at the BigTable paper.

Each row in the imagery table corresponds to a single
geographic segment. Rows are named to ensure that
adjacent geographic segments are stored near each other.
The table contains a column family to keep track of the
sources of data for each segment. This column family
has a large number of columns: essentially one for each
raw data image.

Yes, we can have a large number of columns in a single column family.
For the case above, I think the layout would look like this:

                column:miles                          image: ...
=================================================================
segment(x, y)   column:1 miles  <segment(x',y')>
                column:2 miles  <segment(x^,y^)>
                .......

Then we can search for something within an N-mile radius, right?
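
A rough sketch of writing that layout, against the 0.x-era client API
that shows up in the stack trace below (BatchUpdate/HTable.commit); the
table name "imagery" and the cell values are only illustrative:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.io.BatchUpdate;
  import org.apache.hadoop.hbase.util.Bytes;

  public class SegmentLayoutSketch {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(new HBaseConfiguration(), "imagery");
      // One row per geographic segment; one column per distance, so a
      // single row read answers "what is within N miles of segment(x, y)?".
      BatchUpdate update = new BatchUpdate("segment(x,y)");
      update.put("column:1 miles", Bytes.toBytes("segment(x',y')"));
      update.put("column:2 miles", Bytes.toBytes("segment(x^,y^)"));
      table.commit(update);
    }
  }
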
Finally, ... I need another solution.

On Sat, Nov 29, 2008 at 12:35 AM, Thibaut_ <tb...@blue.lu> wrote:
>
> Your application uses too much memory. Try increasing the child heap space
> for mapreduce applications. (It's in the hadoop configuration file,
> mapred.child.java.opts)
>
> Thibaut
>
>
> Edward J. Yoon-2 wrote:
>>
>> While running a mapred job, I received the error below. The RowResult
>> seems too large. What do you think?
>>
>> ----
>> 08/11/27 13:42:49 INFO mapred.JobClient: map 0% reduce 0%
>> 08/11/27 13:42:55 INFO mapred.JobClient: map 50% reduce 0%
>> 08/11/27 13:43:09 INFO mapred.JobClient: map 50% reduce 8%
>> 08/11/27 13:43:13 INFO mapred.JobClient: map 50% reduce 16%
>> 08/11/27 13:43:15 INFO mapred.JobClient: Task Id :
>> attempt_200811271320_0006_m_000000_0, Status : FAILED
>> java.lang.OutOfMemoryError: Java heap space
>>         at java.util.Arrays.copyOf(Arrays.java:2786)
>>         at
>> java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>         at
>> org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>>         at
>> org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
>>         at org.apache.hadoop.hbase.io.RowResult.write(RowResult.java:245)
>>         at
>> org.apache.hadoop.hbase.util.Writables.getBytes(Writables.java:49)
>>         at
>> org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:134)
>>
>> --
>> Best Regards, Edward J. Yoon @ NHN, corp.
>> edwardyoon@apache.org
>> http://blog.udanax.org
>>
>>
>
> --
> View this message in context: http://www.nabble.com/java.lang.OutOfMemoryError%3A-Java-heap-space-tp20714065p20736470.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org

Re: java.lang.OutOfMemoryError: Java heap space

Posted by stack <st...@duboce.net>.
Edward J. Yoon wrote:
> If I create a scanner in each map task to avoid this problem, HBase
> throws a lot of UnknownScannerExceptions.
>   

UnknownScannerExceptions (USEs) are how the time-out of the client-server
dance around scanning manifests itself.  Either the client took too long
to check back in again -- stuck GC'ing in the task-hosting client, as may
be your case below -- and its scanner lease expired on the server, or the
server is swamped, perhaps by an adjacent tasktracker or some other
process, and is not getting around to processing the client in time.
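
If it is the lease side, the knob to look at is the region server scanner
lease.  A sketch only, assuming the property in your HBase version is
named hbase.regionserver.lease.period (check hbase-default.xml for the
exact name and default).  It is a server-side setting, so to lengthen it
you would raise it in hbase-site.xml on the regionservers and restart
them; the snippet below only prints what your classpath config says:

  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class LeaseCheck {
    public static void main(String[] args) {
      // Loads hbase-default.xml/hbase-site.xml from the classpath.
      HBaseConfiguration conf = new HBaseConfiguration();
      long leaseMs = conf.getLong("hbase.regionserver.lease.period", 60 * 1000);
      System.out.println("Scanner lease period (ms): " + leaseMs);
    }
  }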

St.Ack

Re: java.lang.OutOfMemoryError: Java heap space

Posted by "Edward J. Yoon" <ed...@apache.org>.
If I create a scanner in each map task to avoid this problem, HBase
throws a lot of UnknownScannerExceptions.
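
Roughly, what each map does looks like the sketch below (written against
the 0.x client API as best I recall it; the table and column names are
only illustrative):

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Scanner;
  import org.apache.hadoop.hbase.io.RowResult;
  import org.apache.hadoop.hbase.util.Bytes;

  public class PerMapScanSketch {
    public static void main(String[] args) throws Exception {
      HTable table = new HTable(new HBaseConfiguration(), "imagery");
      // Request only one column family so each RowResult stays small.
      Scanner scanner =
          table.getScanner(new byte[][] { Bytes.toBytes("column:") });
      try {
        RowResult row;
        while ((row = scanner.next()) != null) {
          // ... process the row ...
        }
      } finally {
        scanner.close();
      }
    }
  }

Even with a narrow column list, a map that stalls between next() calls
(GC, long per-row work) can still lose its scanner lease.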

On Fri, Dec 12, 2008 at 5:39 PM, Edward J. Yoon <ed...@apache.org> wrote:
> Yes, the RowResult seems too large. But I don't think "increasing the
> child heap" is a good solution.
>
> Let's look at the BigTable paper.
>
> Each row in the imagery table corresponds to a single
> geographic segment. Rows are named to ensure that
> adjacent geographic segments are stored near each other.
> The table contains a column family to keep track of the
> sources of data for each segment. This column family
> has a large number of columns: essentially one for each
> raw data image.
>
> Yes, we can have a large number of columns in a single column family.
> For the case above, I think the layout would look like this:
>
>                 column:miles                          image: ...
> =================================================================
> segment(x, y)   column:1 miles  <segment(x',y')>
>                 column:2 miles  <segment(x^,y^)>
>                 .......
>
> Then we can search for something within an N-mile radius, right?
> Finally, ... I need another solution.
>
> On Sat, Nov 29, 2008 at 12:35 AM, Thibaut_ <tb...@blue.lu> wrote:
>>
>> Your application uses too much memory. Try increasing the child heap space
>> for mapreduce applications. (It's in the hadoop configuration file,
>> mapred.child.java.opts)
>>
>> Thibaut
>>
>>
>> Edward J. Yoon-2 wrote:
>>>
>>> While running a mapred job, I received the error below. The RowResult
>>> seems too large. What do you think?
>>>
>>> ----
>>> 08/11/27 13:42:49 INFO mapred.JobClient: map 0% reduce 0%
>>> 08/11/27 13:42:55 INFO mapred.JobClient: map 50% reduce 0%
>>> 08/11/27 13:43:09 INFO mapred.JobClient: map 50% reduce 8%
>>> 08/11/27 13:43:13 INFO mapred.JobClient: map 50% reduce 16%
>>> 08/11/27 13:43:15 INFO mapred.JobClient: Task Id :
>>> attempt_200811271320_0006_m_000000_0, Status : FAILED
>>> java.lang.OutOfMemoryError: Java heap space
>>>         at java.util.Arrays.copyOf(Arrays.java:2786)
>>>         at
>>> java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>>>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>>         at
>>> org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>>>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>>>         at
>>> org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
>>>         at org.apache.hadoop.hbase.io.RowResult.write(RowResult.java:245)
>>>         at
>>> org.apache.hadoop.hbase.util.Writables.getBytes(Writables.java:49)
>>>         at
>>> org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:134)
>>>
>>> --
>>> Best Regards, Edward J. Yoon @ NHN, corp.
>>> edwardyoon@apache.org
>>> http://blog.udanax.org
>>>
>>>
>>
>> --
>> View this message in context: http://www.nabble.com/java.lang.OutOfMemoryError%3A-Java-heap-space-tp20714065p20736470.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
>
>
>
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardyoon@apache.org
> http://blog.udanax.org
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
