Posted to common-user@hadoop.apache.org by Christophe Taton <ta...@apache.org> on 2008/06/17 13:18:45 UTC
Task failing: caused by FileSystem close?
Hi all,
I am experiencing (through my students) the following error on a 28-node
cluster running Hadoop 0.16.4.
Some jobs fail, with many map tasks aborting with this error message:
2008-06-17 12:25:01,512 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.io.IOException: Filesystem closed
    at org.apache.hadoop.dfs.DFSClient.checkOpen(DFSClient.java:166)
    at org.apache.hadoop.dfs.DFSClient.access$500(DFSClient.java:58)
    at org.apache.hadoop.dfs.DFSClient$DFSInputStream.close(DFSClient.java:1103)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.io.SequenceFile$Reader.close(SequenceFile.java:1541)
    at org.apache.hadoop.mapred.SequenceFileRecordReader.close(SequenceFileRecordReader.java:125)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:155)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:212)
    at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2084)
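One hypothesis: FileSystem.get() returns a cached instance that is shared
within the task JVM, so if user code (e.g. a mapper's close() method) calls
close() on it, the framework's own record-reader close afterwards finds the
filesystem already closed. A minimal self-contained sketch of that mechanism
(the MockFileSystem class and its methods are illustrative stand-ins, not the
real Hadoop classes):

```java
import java.io.IOException;

// Stand-in for the cached FileSystem/DFSClient handle shared by all
// callers in one JVM. Closing it once invalidates it for everyone.
class MockFileSystem {
    private boolean open = true;

    // Analogue of DFSClient.checkOpen() in the stack trace above.
    void checkOpen() throws IOException {
        if (!open) throw new IOException("Filesystem closed");
    }

    void close() { open = false; }

    String read() throws IOException {
        checkOpen();
        return "record";
    }
}

public class SharedHandleDemo {
    public static void main(String[] args) {
        MockFileSystem fs = new MockFileSystem(); // shared cached instance

        // User code mistakenly closes the shared handle...
        fs.close();

        // ...and the framework's record reader then fails just like the trace.
        try {
            fs.read();
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints "Filesystem closed"
        }
    }
}
```

If that is what is happening, the fix would be for the mapper code to stop
closing the FileSystem it obtained from FileSystem.get() and let the
framework manage its lifetime.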
Any clue why this would happen?
Thanks in advance,
Christophe