Posted to common-user@hadoop.apache.org by Peter Thygesen <th...@infopaq.dk> on 2007/12/01 23:25:27 UTC

MapReduce error: "Could not find any valid local directory for tmp/client-...."

I can't get my job to run on my cluster. First I tested my code locally; it worked. Then I tried it on a single-machine cluster; it worked, too.

But when I ran it on my "micro" four-machine cluster, all the mapping completed and then the reduce phase failed.

Can anybody give me a hint as to what I have done wrong or what I am missing?

Thx. Peter

hadoop@hadoopmaster:~/hadoop$ bin/hadoop jar recordcount.jar RecordCount /xml/2006/12/11/495b8ee5e87487e8fcec571c78406778.xml /mapred/test/3
07/12/01 22:42:13 INFO mapred.FileInputFormat: Total input paths to process : 1
07/12/01 22:42:14 INFO mapred.JobClient: Running job: job_200711301526_0008
07/12/01 22:42:15 INFO mapred.JobClient:  map 0% reduce 0%
07/12/01 22:42:28 INFO mapred.JobClient:  map 100% reduce 0%
07/12/01 22:42:37 INFO mapred.JobClient: Task Id : task_200711301526_0008_r_000000_0, Status : FAILED
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for tmp/client-4264252075834575900
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:281)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:294)
	at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:155)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.newBackupFile(DFSClient.java:1483)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.openBackupStream(DFSClient.java:1450)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:1592)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:140)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:122)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1728)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:49)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:64)
	at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:68)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:333)
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1760)

 


RE: MapReduce error: "Could not find any valid local directory for tmp/client-...."

Posted by Peter Thygesen <th...@infopaq.dk>.
In case others experience the same problem, this is what I did.

It turns out that my name node used a different "hadoop.tmp.dir" setting
than my data nodes.
Making sure they all had the same "hadoop.tmp.dir" fixed my problem.
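For reference, the fix amounts to giving every node an identical hadoop.tmp.dir entry. A minimal sketch of what that might look like in each node's conf/hadoop-site.xml (the path shown is an assumed example, not taken from my actual setup):

```xml
<!-- conf/hadoop-site.xml: must be identical on the name node
     and on every data node / task tracker -->
<property>
  <name>hadoop.tmp.dir</name>
  <!-- assumed example path; any directory that exists and is
       writable by the hadoop user on every node will do -->
  <value>/home/hadoop/hadoop-tmp</value>
</property>
```

After changing it, copy the same config file to all nodes and restart the daemons so they pick it up.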

\Peter

-----Original Message-----
From: Peter Thygesen 
Sent: 1. december 2007 23:25
To: hadoop-user@lucene.apache.org
Subject: MapReduce error: "Could not find any valid local directory for
tmp/client-...."
