Posted to common-user@hadoop.apache.org by shan s <my...@gmail.com> on 2012/05/22 14:43:06 UTC

HDFS out of disk space.

My cluster ran out of disk space during a job execution and now shows
disk usage at 100%. When I attempt hadoop fs -ls input, I get the
error below.

Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared
memory file:   /tmp/hsperfdata_user/32062
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
Error: Could not find or load main class ___.tmp.hsperfdata_user.32074
Is there an easy way to recover from this so that I can delete HDFS
files and get the cluster working again?
Thanks, Prashant.

Re: HDFS out of disk space.

Posted by Harsh J <ha...@cloudera.com>.
Prashant,

You may be able to salvage some temp space (if it was used up by
earlier maps or reduces) from the shared mapred.local.dir directories
by restarting the TaskTracker. Also clear out unwanted items from /tmp
(which is likely what is blocking your JVMs). Or, if you have another
disk with free space, set HADOOP_OPTS in hadoop-env.sh to
"-Djava.io.tmpdir=/path/to/dir/on/free/disk", make sure the chosen
path has permissions 777, and then, once HDFS is up, delete some
unwanted files.
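
As a rough sketch of the above (the /data2/tmp path is just a
placeholder for whatever directory you pick on the disk that still has
free space; substitute your own):

  # conf/hadoop-env.sh -- point JVM temp files at a disk with free space
  export HADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir=/data2/tmp"

  # create the directory and open it up so every task/client JVM can write
  mkdir -p /data2/tmp
  chmod 777 /data2/tmp

  # restart the TaskTracker on the node so it cleans up old
  # mapred.local.dir data (assumes the stock bin/ daemon scripts)
  hadoop-daemon.sh stop tasktracker
  hadoop-daemon.sh start tasktracker

  # once HDFS is responsive again, reclaim space, e.g.:
  hadoop fs -rmr /path/to/unwanted/dir
  hadoop fs -expunge    # empty .Trash (if enabled) so blocks are freed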

On Tue, May 22, 2012 at 6:13 PM, shan s <my...@gmail.com> wrote:
> My cluster ran out of disk space during a job execution and now shows
> disk usage at 100%. When I attempt hadoop fs -ls input, I get the
> error below.
>
> Java HotSpot(TM) 64-Bit Server VM warning: Insufficient space for shared
> memory file:   /tmp/hsperfdata_user/32062
> Try using the -Djava.io.tmpdir= option to select an alternate temp location.
> Error: Could not find or load main class ___.tmp.hsperfdata_user.32074
> Is there an easy way to recover from this so that I can delete HDFS
> files and get the cluster working again?
> Thanks, Prashant.



-- 
Harsh J