Posted to user@hadoop.apache.org by Chih-Hsien Wu <ch...@gmail.com> on 2013/10/23 21:20:08 UTC

Hadoop 1.2.1 corrupt after restart from out of heap memory exception

I uploaded data into the distributed file system, and the cluster summary
shows there is enough heap memory. However, whenever I try to run a Mahout
0.8 command, the system throws an out-of-heap-memory exception. I shut down
the Hadoop cluster and allocated more memory to mapred.child.java.opts, but
when I restarted the cluster the NameNode was corrupted. Any help is
appreciated.
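
For reference, the change to mapred.child.java.opts was made roughly like
this in mapred-site.xml (the -Xmx value shown is only illustrative, not the
exact figure used):

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>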