Posted to common-user@hadoop.apache.org by Sandeep Dhawan <ds...@hcl.in> on 2009/01/19 19:04:03 UTC

Hadoop Exceptions

Here are a few Hadoop exceptions that I am getting while running a MapReduce job on
700 MB of data on a 3-node cluster on the Windows platform (using Cygwin):

1. 2009-01-08 17:54:10,597 INFO org.apache.hadoop.dfs.DataNode: writeBlock
blk_-4309088198093040326_1001 received exception java.io.IOException: Block
blk_-4309088198093040326_1001 is valid, and cannot be written to.
2009-01-08 17:54:10,597 ERROR org.apache.hadoop.dfs.DataNode:
DatanodeRegistration(10.120.12.91:50010,
storageID=DS-70805886-10.120.12.91-50010-1231381442699, infoPort=50075,
ipcPort=50020):DataXceiver: java.io.IOException: Block
blk_-4309088198093040326_1001 is valid, and cannot be written to.
	at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:921)
	at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2364)
	at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1218)
	at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1076)
	at java.lang.Thread.run(Thread.java:619)

2. This particular job succeeded. Is it possible that this task was a
speculative execution attempt that was killed before it could start?
Exception in thread "main" java.lang.NullPointerException
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2195)
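One way to test the speculative-execution theory is to turn it off and rerun the job; on Hadoop 0.x the relevant job properties are the two below (shown here as a sketch for hadoop-site.xml, though they can also be set per-job on the JobConf):

```xml
<!-- Sketch: disable speculative execution to rule it out as the source of
     the NullPointerException in the killed child task. -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```

If the NullPointerException stops appearing with these set to false, that would support the speculative-task explanation.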

3. 2009-01-15 21:27:13,547 WARN org.apache.hadoop.mapred.ReduceTask:
attempt_200901152118_0001_r_000000_0 Merge of the inmemory files threw an
exception: java.io.IOException: Expecting a line not the end of stream
	at org.apache.hadoop.fs.DF.parseExecResult(DF.java:109)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
	at org.apache.hadoop.util.Shell.run(Shell.java:134)
	at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
	at org.apache.hadoop.mapred.MapOutputFile.getInputFileForWrite(MapOutputFile.java:160)
	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.doInMemMerge(ReduceTask.java:2105)
	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$InMemFSMergeThread.run(ReduceTask.java:2078)
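The trace in exception 3 shows Hadoop shelling out to `df -k` (via org.apache.hadoop.fs.DF and Shell) to find free local disk space, and failing to parse what comes back; "Expecting a line not the end of stream" means the command printed a header but no data line, which can happen under Cygwin. The following is a minimal, self-contained sketch of that two-line parse (the class name, method, and column index are my assumptions for illustration, not Hadoop's actual code):

```java
import java.io.IOException;

// Hypothetical sketch of the parse that org.apache.hadoop.fs.DF
// performs on `df -k` output: a header line, then one data line.
public class DfParseSketch {

    // Returns the "Available" (KB) column from `df -k`-style output.
    static long parseAvailable(String dfOutput) throws IOException {
        String[] lines = dfOutput.split("\n");
        // If the shell command produces no data line (as can happen under
        // Cygwin), the parse fails with the message seen in exception 3.
        if (lines.length < 2 || lines[1].trim().isEmpty()) {
            throw new IOException("Expecting a line not the end of stream");
        }
        String[] cols = lines[1].trim().split("\\s+");
        return Long.parseLong(cols[3]); // "Available" column
    }

    public static void main(String[] args) throws IOException {
        String ok = "Filesystem 1K-blocks Used Available Use% Mounted on\n"
                  + "C:/cygwin 104857600 52428800 52428800 50% /\n";
        System.out.println(parseAvailable(ok)); // prints 52428800

        String headerOnly = "Filesystem 1K-blocks Used Available Use% Mounted on\n";
        try {
            parseAvailable(headerOnly);
        } catch (IOException e) {
            System.out.println(e.getMessage()); // the message from exception 3
        }
    }
}
```

A quick check on each node would be to run `df -k` from the same Cygwin shell that launches the TaskTracker and confirm it actually prints a data line for the directories under mapred.local.dir.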

4. 2009-01-15 21:27:13,547 INFO org.apache.hadoop.mapred.ReduceTask:
In-memory merge complete: 47 files left.
2009-01-15 21:27:13,579 WARN org.apache.hadoop.mapred.TaskTracker: Error
running child
java.io.IOException: attempt_200901152118_0001_r_000000_0The reduce copier failed
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:255)
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)

5. Caused by: java.io.IOException: An established connection was aborted by
the software in your host machine
	... 12 more

Can anyone give me some pointers on what the issue could be?

Thanks,
Sandeep

