Posted to dev@hama.apache.org by "Edward J. Yoon" <ed...@apache.org> on 2008/12/11 03:56:47 UTC
RetriesExhaustedException during HTable.commit
I am receiving a RetriesExhaustedException during HTable.commit;
the size of the cell is 50 MB (2,500 * 2,500 double entries).
Is there a configuration option to avoid this problem?
Cluster: 4 nodes, 16 cores (Intel(R) Xeon(R) CPU 2.33 GHz, SATA hard
disks, 16 GB physical memory)
Thanks.
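(For reference, the 50 MB figure follows directly from the block shape: 2,500 * 2,500 double entries at 8 bytes each is 50,000,000 bytes, before any serialization overhead. A minimal sketch of that arithmetic, with a hypothetical helper name:)

```java
// Back-of-the-envelope estimate of the cell size described above:
// a 2,500 x 2,500 block of doubles, 8 bytes per entry, is 50,000,000
// bytes (~48 MiB) before serialization overhead. Class and method
// names here are illustrative only.
public class CellSizeEstimate {

    /** Raw byte size of a rows x cols block of doubles. */
    static long blockBytes(int rows, int cols) {
        return (long) rows * cols * Double.BYTES;
    }

    public static void main(String[] args) {
        long bytes = blockBytes(2500, 2500);
        System.out.println(bytes + " bytes = " + (bytes / 1_000_000) + " MB");
        // prints: 50000000 bytes = 50 MB
    }
}
```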
----
08/12/11 11:40:58 INFO mapred.JobClient: map 100% reduce 76%
08/12/11 11:41:02 INFO mapred.JobClient: map 100% reduce 80%
08/12/11 11:42:07 INFO mapred.JobClient: Task Id :
attempt_200812100956_0044_r_000007_1, Status : FAILED
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
contact region server 61.247.201.164:60020 for region
DenseMatrix_randmmnwo,,1228961537371, row '1', but failed after 10
attempts.
Exceptions:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:863)
at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:964)
at org.apache.hadoop.hbase.client.HTable.commit(HTable.java:950)
at org.apache.hama.DenseMatrix.setBlock(DenseMatrix.java:496)
at org.apache.hama.mapred.BlockingMapRed$BlockingReducer.reduce(BlockingMapRed.java:150)
at org.apache.hama.mapred.BlockingMapRed$BlockingReducer.reduce(BlockingMapRed.java:122)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:318)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
--
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
Re: RetriesExhaustedException during HTable.commit
Posted by "Edward J. Yoon" <ed...@apache.org>.
Oh, many thanks. I hope it will be fixed in the 0.18.x release.
On Fri, Dec 12, 2008 at 3:34 PM, Michael Stack <st...@duboce.net> wrote:
> Edward J. Yoon wrote:
>>
>> I am receiving a RetriesExhaustedException during
>> HTable.commit; the size of the cell is 50 MB (2,500 * 2,500 double
>> entries).
>> Is there a configuration option to avoid this problem?
>>
>
> Looks like HADOOP-4802 (in HBase, it's part of HBASE-900). Big cells can
> trigger an OOME. Update if you are running TRUNK; or, if you need to stay on
> Hadoop 0.18.0, we can backport the patch for you. Just say.
>
> St.Ack
>
--
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
Re: RetriesExhaustedException during HTable.commit
Posted by Michael Stack <st...@duboce.net>.
Edward J. Yoon wrote:
> I am receiving a RetriesExhaustedException during
> HTable.commit; the size of the cell is 50 MB (2,500 * 2,500 double
> entries).
> Is there a configuration option to avoid this problem?
>
Looks like HADOOP-4802 (in HBase, it's part of HBASE-900). Big cells can
trigger an OOME. Update if you are running TRUNK; or, if you need to stay on
Hadoop 0.18.0, we can backport the patch for you. Just say.
St.Ack
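(Editor's note: until the HBASE-900 fix is available, one client-side workaround for big-cell OOMEs is to split the 2,500 * 2,500 block into smaller tiles and commit each tile as its own, much smaller cell. The sketch below shows only the splitting step; the class and method names are hypothetical, and how the tiles are keyed and written back through HTable is left to the caller.)

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical workaround sketch: rather than committing one ~50 MB cell,
// split the dense block into fixed-size tiles, each small enough to commit
// without pressuring the region server heap. The HBase commit itself is
// deliberately omitted.
public class BlockSplitter {

    /** Splits a square matrix block into tiles of at most tile x tile entries. */
    static List<double[][]> split(double[][] block, int tile) {
        int n = block.length;
        List<double[][]> tiles = new ArrayList<>();
        for (int r = 0; r < n; r += tile) {
            for (int c = 0; c < n; c += tile) {
                int rows = Math.min(tile, n - r);
                int cols = Math.min(tile, n - c);
                double[][] t = new double[rows][cols];
                for (int i = 0; i < rows; i++) {
                    // copy one row segment of the source block into the tile
                    System.arraycopy(block[r + i], c, t[i], 0, cols);
                }
                tiles.add(t);
            }
        }
        return tiles;
    }

    public static void main(String[] args) {
        double[][] block = new double[2500][2500];
        List<double[][]> tiles = split(block, 500);
        // 2500/500 = 5 tiles per side, so 25 tiles, each 500*500*8 = 2 MB
        System.out.println(tiles.size() + " tiles of " + tiles.get(0).length + " rows");
        // prints: 25 tiles of 500 rows
    }
}
```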