Posted to common-user@hadoop.apache.org by Jason Huang <ja...@icare.com> on 2012/09/14 23:50:33 UTC

HDFS Error - BlockReader: error in packet header

Hello,

Looking for some help with setting up Hadoop 1.0.3 in pseudo-distributed mode...

I was able to install Hadoop, configure the .xml files, and start all the daemons:
$ jps
6645 Jps
6030 SecondaryNameNode
6185 TaskTracker
5851 NameNode
6095 JobTracker
5939 DataNode
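
Before running jobs, two sanity checks that should come back clean at this point (a sketch; both commands ship with stock 1.0.3):
$ bin/hadoop dfsadmin -report       # the lone DataNode should be listed as live
$ bin/hadoop fsck / -files -blocks  # should end with: The filesystem under path '/' is HEALTHY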

However, when I tried to play around with a couple of MapReduce jobs
from the provided examples jar, I got the following errors:

(1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
attempt_201209141701_0003_m_000011_0, Status : FAILED
Error initializing attempt_201209141701_0003_m_000011_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)

(2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
/user/jasonhuang/input /user/jasonhuang/output
12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java classes
where applicable
12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
12/09/14 17:37:57 ERROR security.UserGroupInformation:
PriviledgedActionException as:jasonhuang
cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))

Does anyone have an idea why these errors occur and how I can fix them?

thanks!

Jason

Re: HDFS Error - BlockReader: error in packet header

Posted by Jason Huang <ja...@icare.com>.
I tried reinstalling Hadoop, removing all of the previous HDFS
directories, and reformatting the namenode.

After that it appears to be working now.
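
For anyone hitting the same error, the reset amounted to roughly the following (a sketch: the name dir path here is an assumption made to match the data dir visible in the logs; use whatever dfs.name.dir and dfs.data.dir point to in your hdfs-site.xml):

$ bin/stop-all.sh                 # stop all daemons first
$ rm -rf /Users/jasonhuang/hdfs/name /Users/jasonhuang/hdfs/data
$ rm -rf /tmp/hadoop-jasonhuang   # clear stale mapred staging/tmp state
$ bin/hadoop namenode -format     # reformat the namenode (destroys all HDFS data!)
$ bin/start-all.sh

Note that wiping the data dir and reformatting loses everything stored in HDFS, so this is only reasonable on a scratch pseudo-distributed setup like this one.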

thanks for looking at this!

Jason

On Sat, Sep 15, 2012 at 2:37 PM, Jason Huang <ja...@icare.com> wrote:
> Thanks Harsh.
>
> I've tried the following again:
> $ ./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>
> And I got the same error (sorry for having to paste this longggg log):
> Number of Maps  = 10
> Samples per Map = 100
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Wrote input for Map #5
> Wrote input for Map #6
> Wrote input for Map #7
> Wrote input for Map #8
> Wrote input for Map #9
> Starting Job
> 12/09/15 14:20:02 INFO mapred.FileInputFormat: Total input paths to process : 10
> 12/09/15 14:20:02 INFO mapred.JobClient: Running job: job_201209151409_0001
> 12/09/15 14:20:03 INFO mapred.JobClient:  map 0% reduce 0%
> 12/09/15 14:20:14 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000011_0, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000011_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stdout
> 12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stderr
> 12/09/15 14:20:23 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000011_1, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000011_1:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stdout
> 12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stderr
> 12/09/15 14:20:32 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000011_2, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000011_2:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stdout
> 12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stderr
> 12/09/15 14:20:50 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_0, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stdout
> 12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stderr
> 12/09/15 14:20:59 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_1, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_1:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stdout
> 12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stderr
> 12/09/15 14:21:08 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_2, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_2:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stdout
> 12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stderr
> 12/09/15 14:21:17 INFO mapred.JobClient: Job complete: job_201209151409_0001
> 12/09/15 14:21:17 INFO mapred.JobClient: Counters: 4
> 12/09/15 14:21:17 INFO mapred.JobClient:   Job Counters
> 12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 12/09/15 14:21:17 INFO mapred.JobClient: Job Failed: JobCleanup Task
> Failure, Task: task_201209151409_0001_m_000010
> java.io.IOException: Job failed!
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
>         at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
>         at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
>
> Here's a 'brief' version of the datanode log (but still very long...):
> 2012-09-15 14:20:02,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_39584416619615086_1077 src: /127.0.0.1:49829 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49829, dest: /127.0.0.1:50010, bytes: 20494, op: HDFS_WRITE, cliID: DFSClient_1015299679, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 565000
> 2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_39584416619615086_1077 terminating
> 2012-09-15 14:20:02,663 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49831, bytes: 20658, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 270000
> 2012-09-15 14:20:02,665 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49832, bytes: 18, op: HDFS_READ, cliID: DFSClient_672168163, offset: 20480, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 146000
> 2012-09-15 14:20:02,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6641115981657984283_1079 src: /127.0.0.1:49833 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49833, dest: /127.0.0.1:50010, bytes: 20563, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6641115981657984283_1079, duration: 2189000
> 2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6641115981657984283_1079 terminating
> 2012-09-15 14:20:02,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6285781475981276067_1080 src: /127.0.0.1:49835 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49835, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_1436299339, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 321000
> 2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6285781475981276067_1080 terminating
> 2012-09-15 14:20:02,781 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49836, bytes: 201, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6638660277808139598_1076, duration: 152000
> 2012-09-15 14:20:05,555 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49838, bytes: 110, op: HDFS_READ, cliID: DFSClient_1214970016, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 158000
> 2012-09-15 14:20:05,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49840, bytes: 20658, op: HDFS_READ, cliID: DFSClient_1762809953, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 302000
> ................
> 2012-09-15 14:21:08,667 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49944, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 753000
> 2012-09-15 14:21:11,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49947, bytes: 3096, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 372000
> 2012-09-15 14:21:11,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49948, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 353000
> 2012-09-15 14:21:11,673 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49949, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 355000
> 2012-09-15 14:21:14,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49951, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 368000
> 2012-09-15 14:21:17,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5156635908233241378_1080 src: /127.0.0.1:49953 dest: /127.0.0.1:50010
> 2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49953, dest: /127.0.0.1:50010, bytes: 28184, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_5156635908233241378_1080, duration: 867000
> 2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_5156635908233241378_1080 terminating
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-8793872286240925170_1064 file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6641115981657984283_1079 file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-8793872286240925170_1064 at file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6285781475981276067_1080 file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6282803597350612472_1068 file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6641115981657984283_1079 at file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-1973500835155733464_1071 file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6285781475981276067_1080 at file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-201819473056990539_1072 file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6282803597350612472_1068 at file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_966543399440919118_1073 file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-1973500835155733464_1071 at file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_1230157759905402594_1069 file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-201819473056990539_1072 at file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_3059764143082927316_1070 file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_966543399440919118_1073 at file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_4471127410063335353_1066 file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_1230157759905402594_1069 at file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_5156635908233241378_1080 file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_3059764143082927316_1070 at file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7335749996441800570_1065 file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 for deletion
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_4471127410063335353_1066 at file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7674314220695151815_1067 file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 for deletion
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_5156635908233241378_1080 at file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7335749996441800570_1065 at file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7674314220695151815_1067 at file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815
> 2012-09-15 14:29:48,016 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_5700986404331589806_1038
>
> Not sure where I should go next. Hope to get some help.
>
> I didn't tweak any checksum-size-related configs in my config files -
> actually, I wasn't even aware that could be done in the config files.
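>
> A check along these lines against the conf dir would confirm that (a sketch; io.bytes.per.checksum, default 512, is presumably the knob Harsh means in 1.x - no grep output means no override):
> $ grep -A1 io.bytes.per.checksum conf/*.xml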
>
> thanks!
>
> Jason
>
>
> On Fri, Sep 14, 2012 at 10:05 PM, Harsh J <ha...@cloudera.com> wrote:
>> Hi Jason,
>>
>> Does the DN log have something in it that corresponds to these errors?
>> Is there also a stack trace or further text after the line you've
>> pasted? Can we have it?
>>
>> Also, did you tweak any checksum size related configs in your config files?
>>
>> On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
>>> Hello,
>>>
>>> Looking for some help with setting up Hadoop 1.0.3 in pseudo-distributed mode...
>>>
>>> I was able to install Hadoop, configure the .xml files, and start all the daemons:
>>> $ jps
>>> 6645 Jps
>>> 6030 SecondaryNameNode
>>> 6185 TaskTracker
>>> 5851 NameNode
>>> 6095 JobTracker
>>> 5939 DataNode
>>>
>>> However, when I tried to play around with a couple of MapReduce jobs
>>> from the provided examples jar, I got the following errors:
>>>
>>> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>>> Number of Maps  = 10
>>> Samples per Map = 100
>>> Wrote input for Map #0
>>> Wrote input for Map #1
>>> Wrote input for Map #2
>>> Wrote input for Map #3
>>> Wrote input for Map #4
>>> Wrote input for Map #5
>>> Wrote input for Map #6
>>> Wrote input for Map #7
>>> Wrote input for Map #8
>>> Wrote input for Map #9
>>> Starting Job
>>> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
>>> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
>>> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
>>> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
>>> attempt_201209141701_0003_m_000011_0, Status : FAILED
>>> Error initializing attempt_201209141701_0003_m_000011_0:
>>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>>> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>>>
>>> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
>>> /user/jasonhuang/input /user/jasonhuang/output
>>> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
>>> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes
>>> where applicable
>>> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
>>> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
>>> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
>>> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:jasonhuang
>>> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>>> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>>>
>>> Does anyone have an idea why these errors occur and how I can fix them?
>>>
>>> thanks!
>>>
>>> Jason
>>
>>
>>
>> --
>> Harsh J

Re: HDFS Error - BlockReader: error in packet header

Posted by Jason Huang <ja...@icare.com>.
I tried to reinstall hadoop and remove all the previous hdfs
directories and reformat the name node.

After that it appears to be working now.

thanks for looking at this!

Jason

On Sat, Sep 15, 2012 at 2:37 PM, Jason Huang <ja...@icare.com> wrote:
> Thanks Harsh.
>
> I've tried the following again:
> $ ./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>
> And I got the same error (sorry for having to paste this longggg log):
> Number of Maps  = 10
> Samples per Map = 100
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Wrote input for Map #5
> Wrote input for Map #6
> Wrote input for Map #7
> Wrote input for Map #8
> Wrote input for Map #9
> Starting Job
> 12/09/15 14:20:02 INFO mapred.FileInputFormat: Total input paths to process : 10
> 12/09/15 14:20:02 INFO mapred.JobClient: Running job: job_201209151409_0001
> 12/09/15 14:20:03 INFO mapred.JobClient:  map 0% reduce 0%
> 12/09/15 14:20:14 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000011_0, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000011_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stdout
> 12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stderr
> 12/09/15 14:20:23 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000011_1, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000011_1:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stdout
> 12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stderr
> 12/09/15 14:20:32 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000011_2, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000011_2:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stdout
> 12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stderr
> 12/09/15 14:20:50 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_0, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stdout
> 12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stderr
> 12/09/15 14:20:59 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_1, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_1:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stdout
> 12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stderr
> 12/09/15 14:21:08 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_2, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_2:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stdout
> 12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stderr
> 12/09/15 14:21:17 INFO mapred.JobClient: Job complete: job_201209151409_0001
> 12/09/15 14:21:17 INFO mapred.JobClient: Counters: 4
> 12/09/15 14:21:17 INFO mapred.JobClient:   Job Counters
> 12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 12/09/15 14:21:17 INFO mapred.JobClient: Job Failed: JobCleanup Task
> Failure, Task: task_201209151409_0001_m_000010
> java.io.IOException: Job failed!
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
>         at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
>         at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
>
> Here's a 'brief' version of the datanode log (but still very long...):
> 2012-09-15 14:20:02,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_39584416619615086_1077 src: /127.0.0.1:49829 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49829, dest: /127.0.0.1:50010, bytes: 20494, op: HDFS_WRITE, cliID: DFSClient_1015299679, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 565000
> 2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_39584416619615086_1077 terminating
> 2012-09-15 14:20:02,663 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49831, bytes: 20658, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 270000
> 2012-09-15 14:20:02,665 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49832, bytes: 18, op: HDFS_READ, cliID: DFSClient_672168163, offset: 20480, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 146000
> 2012-09-15 14:20:02,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6641115981657984283_1079 src: /127.0.0.1:49833 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49833, dest: /127.0.0.1:50010, bytes: 20563, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6641115981657984283_1079, duration: 2189000
> 2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6641115981657984283_1079 terminating
> 2012-09-15 14:20:02,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6285781475981276067_1080 src: /127.0.0.1:49835 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49835, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_1436299339, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 321000
> 2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6285781475981276067_1080 terminating
> 2012-09-15 14:20:02,781 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49836, bytes: 201, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6638660277808139598_1076, duration: 152000
> 2012-09-15 14:20:05,555 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49838, bytes: 110, op: HDFS_READ, cliID: DFSClient_1214970016, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 158000
> 2012-09-15 14:20:05,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49840, bytes: 20658, op: HDFS_READ, cliID: DFSClient_1762809953, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 302000
> ................ ...............
> 2012-09-15 14:21:08,667 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49944, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 753000
> 2012-09-15 14:21:11,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49947, bytes: 3096, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 372000
> 2012-09-15 14:21:11,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49948, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 353000
> 2012-09-15 14:21:11,673 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49949, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 355000
> 2012-09-15 14:21:14,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49951, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 368000
> 2012-09-15 14:21:17,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5156635908233241378_1080 src: /127.0.0.1:49953 dest: /127.0.0.1:50010
> 2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49953, dest: /127.0.0.1:50010, bytes: 28184, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_5156635908233241378_1080, duration: 867000
> 2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_5156635908233241378_1080 terminating
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-8793872286240925170_1064 file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6641115981657984283_1079 file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-8793872286240925170_1064 at file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6285781475981276067_1080 file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6282803597350612472_1068 file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6641115981657984283_1079 at file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-1973500835155733464_1071 file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6285781475981276067_1080 at file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-201819473056990539_1072 file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6282803597350612472_1068 at file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_966543399440919118_1073 file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-1973500835155733464_1071 at file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_1230157759905402594_1069 file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-201819473056990539_1072 at file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_3059764143082927316_1070 file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_966543399440919118_1073 at file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_4471127410063335353_1066 file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_1230157759905402594_1069 at file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_5156635908233241378_1080 file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_3059764143082927316_1070 at file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7335749996441800570_1065 file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 for deletion
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_4471127410063335353_1066 at file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7674314220695151815_1067 file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 for deletion
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_5156635908233241378_1080 at file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7335749996441800570_1065 at file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7674314220695151815_1067 at file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815
> 2012-09-15 14:29:48,016 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_5700986404331589806_1038
>
> Not sure where I should go next. Hope to get some help.
>
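(A hedged aside, not part of the original messages: one concrete next step at this point would be HDFS's built-in integrity checker, which ships with this 1.0.3 release. Assuming the same working directory as the commands above, a sketch:

$ bin/hadoop fsck / -files -blocks -locations

This walks the namespace, lists each file's blocks and replica locations, and flags corrupt or missing blocks, which would show whether the staged job files are readable from HDFS at all.)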
> I didn't tweak any checksum-size-related configs in my config files -
> actually, I am not even aware of the ability to do so in the config files.
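(For reference, and as an assumption about what "checksum size related configs" covers: in this release the checksum chunk size is governed by the io.bytes.per.checksum property, which defaults to 512 bytes, so an untouched config simply means the default applies. A minimal override, sketched for conf/core-site.xml, would look like:

<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
</property>

This is illustrative only; nothing in the thread suggests changing it.)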
>
> thanks!
>
> Jason
>
>
> On Fri, Sep 14, 2012 at 10:05 PM, Harsh J <ha...@cloudera.com> wrote:
>> Hi Jason,
>>
>> Does the DN log have something in it that corresponds to these errors?
>> Is there also some stack trace or further text after the last line
>> you've pasted? Can we have it?
>>
>> Also, did you tweak any checksum size related configs in your config files?
>>
>> On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
>>> Hello,
>>>
>>> Looking for some help in setting up hadoop 1.0.3 in pseudo-distributed mode...
>>>
>>> I was able to install hadoop, config the .xml files and start all nodes:
>>> $ JPS
>>> 6645 Jps
>>> 6030 SecondaryNameNode
>>> 6185 TaskTracker
>>> 5851 NameNode
>>> 6095 JobTracker
>>> 5939 DataNode
>>>
>>> However, when I tried to play around with a couple of Map-reduce jobs
>>> with provided example jar files I got the following errors:
>>>
>>> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>>> Number of Maps  = 10
>>> Samples per Map = 100
>>> Wrote input for Map #0
>>> Wrote input for Map #1
>>> Wrote input for Map #2
>>> Wrote input for Map #3
>>> Wrote input for Map #4
>>> Wrote input for Map #5
>>> Wrote input for Map #6
>>> Wrote input for Map #7
>>> Wrote input for Map #8
>>> Wrote input for Map #9
>>> Starting Job
>>> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
>>> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
>>> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
>>> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
>>> attempt_201209141701_0003_m_000011_0, Status : FAILED
>>> Error initializing attempt_201209141701_0003_m_000011_0:
>>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>>> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>>>
>>> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
>>> /user/jasonhuang/input /user/jasonhuang/output
>>> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
>>> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes
>>> where applicable
>>> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
>>> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
>>> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
>>> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:jasonhuang
>>> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>>> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>>>
>>> Does anyone have an idea why these errors occur and how I can fix them?
>>>
>>> thanks!
>>>
>>> Jason
>>
>>
>>
>> --
>> Harsh J
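
(A closing sketch, not from the original messages: when the on-disk block files themselves are corrupt, one recovery path on a scratch pseudo-distributed setup is a full wipe and re-format, roughly:

$ bin/stop-all.sh
$ rm -rf /Users/jasonhuang/hdfs/name /Users/jasonhuang/hdfs/data
$ bin/hadoop namenode -format
$ bin/start-all.sh

The data path comes from the datanode log above; the name path is an assumption about this setup. Note that "namenode -format" erases everything in HDFS, so this is only sensible on a disposable install like the one in this thread.)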


>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stdout
> 12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stderr
> 12/09/15 14:20:50 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_0, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stdout
> 12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stderr
> 12/09/15 14:20:59 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_1, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_1:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stdout
> 12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stderr
> 12/09/15 14:21:08 INFO mapred.JobClient: Task Id :
> attempt_201209151409_0001_m_000010_2, Status : FAILED
> Error initializing attempt_201209151409_0001_m_000010_2:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>         at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
>         at java.io.DataInputStream.read(DataInputStream.java:100)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>         at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>         at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>         at java.lang.Thread.run(Thread.java:636)
>
> 12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stdout
> 12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
> outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stderr
> 12/09/15 14:21:17 INFO mapred.JobClient: Job complete: job_201209151409_0001
> 12/09/15 14:21:17 INFO mapred.JobClient: Counters: 4
> 12/09/15 14:21:17 INFO mapred.JobClient:   Job Counters
> 12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> 12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
> 12/09/15 14:21:17 INFO mapred.JobClient: Job Failed: JobCleanup Task
> Failure, Task: task_201209151409_0001_m_000010
> java.io.IOException: Job failed!
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
>         at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
>         at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
>
> Here's a 'brief' version of the DataNode log (but it's still very long...):
> 2012-09-15 14:20:02,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_39584416619615086_1077 src: /127.0.0.1:49829 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49829, dest: /127.0.0.1:50010, bytes: 20494, op: HDFS_WRITE, cliID: DFSClient_1015299679, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 565000
> 2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_39584416619615086_1077 terminating
> 2012-09-15 14:20:02,663 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49831, bytes: 20658, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 270000
> 2012-09-15 14:20:02,665 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49832, bytes: 18, op: HDFS_READ, cliID: DFSClient_672168163, offset: 20480, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 146000
> 2012-09-15 14:20:02,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6641115981657984283_1079 src: /127.0.0.1:49833 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49833, dest: /127.0.0.1:50010, bytes: 20563, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6641115981657984283_1079, duration: 2189000
> 2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6641115981657984283_1079 terminating
> 2012-09-15 14:20:02,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6285781475981276067_1080 src: /127.0.0.1:49835 dest: /127.0.0.1:50010
> 2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49835, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_1436299339, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 321000
> 2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6285781475981276067_1080 terminating
> 2012-09-15 14:20:02,781 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49836, bytes: 201, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6638660277808139598_1076, duration: 152000
> 2012-09-15 14:20:05,555 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49838, bytes: 110, op: HDFS_READ, cliID: DFSClient_1214970016, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 158000
> 2012-09-15 14:20:05,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49840, bytes: 20658, op: HDFS_READ, cliID: DFSClient_1762809953, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 302000
> ................
> ...............
> 2012-09-15 14:21:08,667 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49944, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 753000
> 2012-09-15 14:21:11,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49947, bytes: 3096, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 372000
> 2012-09-15 14:21:11,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49948, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 353000
> 2012-09-15 14:21:11,673 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49949, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 355000
> 2012-09-15 14:21:14,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49951, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 368000
> 2012-09-15 14:21:17,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5156635908233241378_1080 src: /127.0.0.1:49953 dest: /127.0.0.1:50010
> 2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49953, dest: /127.0.0.1:50010, bytes: 28184, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_5156635908233241378_1080, duration: 867000
> 2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_5156635908233241378_1080 terminating
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-8793872286240925170_1064 file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6641115981657984283_1079 file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-8793872286240925170_1064 at file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6285781475981276067_1080 file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6282803597350612472_1068 file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472 for deletion
> 2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6641115981657984283_1079 at file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-1973500835155733464_1071 file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6285781475981276067_1080 at file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-201819473056990539_1072 file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6282803597350612472_1068 at file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_966543399440919118_1073 file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-1973500835155733464_1071 at file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_1230157759905402594_1069 file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-201819473056990539_1072 at file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_3059764143082927316_1070 file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_966543399440919118_1073 at file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_4471127410063335353_1066 file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_1230157759905402594_1069 at file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_5156635908233241378_1080 file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 for deletion
> 2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_3059764143082927316_1070 at file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7335749996441800570_1065 file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 for deletion
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_4471127410063335353_1066 at file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7674314220695151815_1067 file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 for deletion
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_5156635908233241378_1080 at file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7335749996441800570_1065 at file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570
> 2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7674314220695151815_1067 at file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815
> 2012-09-15 14:29:48,016 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_5700986404331589806_1038
>
> Not sure where I should go next. Hope to get some help.
>
> I didn't tweak any checksum-size-related settings in my config files -
> actually, I wasn't even aware they could be set there.
>
> thanks!
>
> Jason
>
>
> On Fri, Sep 14, 2012 at 10:05 PM, Harsh J <ha...@cloudera.com> wrote:
>> Hi Jason,
>>
>> Does the DN log have anything in it that corresponds to these errors?
>> Is there a stack trace or any further text after the line you've
>> pasted up to? If so, can we have it?
>>
>> Also, did you tweak any checksum size related configs in your config files?
>>
>> On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
>>> Hello,
>>>
>>> Looking for some help in setting up hadoop 1.0.3 in Pseudo distributed mode...
>>>
>>> I was able to install hadoop, config the .xml files and start all nodes:
>>> $ JPS
>>> 6645 Jps
>>> 6030 SecondaryNameNode
>>> 6185 TaskTracker
>>> 5851 NameNode
>>> 6095 JobTracker
>>> 5939 DataNode
>>>
>>> However, when I tried to play around with a couple of Map-reduce jobs
>>> with provided example jar files I got the following errors:
>>>
>>> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>>> Number of Maps  = 10
>>> Samples per Map = 100
>>> Wrote input for Map #0
>>> Wrote input for Map #1
>>> Wrote input for Map #2
>>> Wrote input for Map #3
>>> Wrote input for Map #4
>>> Wrote input for Map #5
>>> Wrote input for Map #6
>>> Wrote input for Map #7
>>> Wrote input for Map #8
>>> Wrote input for Map #9
>>> Starting Job
>>> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
>>> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
>>> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
>>> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
>>> attempt_201209141701_0003_m_000011_0, Status : FAILED
>>> Error initializing attempt_201209141701_0003_m_000011_0:
>>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>>> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>>>
>>> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
>>> /user/jasonhuang/input /user/jasonhuang/output
>>> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
>>> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
>>> native-hadoop library for your platform... using builtin-java classes
>>> where applicable
>>> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
>>> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
>>> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
>>> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:jasonhuang
>>> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>>> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>>>
>>> Does anyone have idea on why the error occurs and how I can fix them?
>>>
>>> thanks!
>>>
>>> Jason
>>
>>
>>
>> --
>> Harsh J

Re: HDFS Error - BlockReader: error in packet header

Posted by Jason Huang <ja...@icare.com>.
Thanks Harsh.

I've tried the following again:
$ ./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100

And I got the same error (sorry for having to paste this longggg log):
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
12/09/15 14:20:02 INFO mapred.FileInputFormat: Total input paths to process : 10
12/09/15 14:20:02 INFO mapred.JobClient: Running job: job_201209151409_0001
12/09/15 14:20:03 INFO mapred.JobClient:  map 0% reduce 0%
12/09/15 14:20:14 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_0, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stdout
12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stderr
12/09/15 14:20:23 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_1, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_1:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stdout
12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stderr
12/09/15 14:20:32 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_2, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_2:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stdout
12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stderr
12/09/15 14:20:50 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_0, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stdout
12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stderr
12/09/15 14:20:59 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_1, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_1:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stdout
12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stderr
12/09/15 14:21:08 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_2, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_2:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stdout
12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stderr
12/09/15 14:21:17 INFO mapred.JobClient: Job complete: job_201209151409_0001
12/09/15 14:21:17 INFO mapred.JobClient: Counters: 4
12/09/15 14:21:17 INFO mapred.JobClient:   Job Counters
12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=0
12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
maps waiting after reserving slots (ms)=0
12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/09/15 14:21:17 INFO mapred.JobClient: Job Failed: JobCleanup Task
Failure, Task: task_201209151409_0001_m_000010
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
	at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
	at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)


Here's a 'brief' version of the DataNode log (but it's still very long...):
2012-09-15 14:20:02,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_39584416619615086_1077 src: /127.0.0.1:49829 dest: /127.0.0.1:50010
2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49829, dest: /127.0.0.1:50010, bytes: 20494, op: HDFS_WRITE, cliID: DFSClient_1015299679, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 565000
2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_39584416619615086_1077 terminating
2012-09-15 14:20:02,663 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49831, bytes: 20658, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 270000
2012-09-15 14:20:02,665 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49832, bytes: 18, op: HDFS_READ, cliID: DFSClient_672168163, offset: 20480, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 146000
2012-09-15 14:20:02,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6641115981657984283_1079 src: /127.0.0.1:49833 dest: /127.0.0.1:50010
2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49833, dest: /127.0.0.1:50010, bytes: 20563, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6641115981657984283_1079, duration: 2189000
2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6641115981657984283_1079 terminating
2012-09-15 14:20:02,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6285781475981276067_1080 src: /127.0.0.1:49835 dest: /127.0.0.1:50010
2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49835, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_1436299339, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 321000
2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6285781475981276067_1080 terminating
2012-09-15 14:20:02,781 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49836, bytes: 201, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6638660277808139598_1076, duration: 152000
2012-09-15 14:20:05,555 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49838, bytes: 110, op: HDFS_READ, cliID: DFSClient_1214970016, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 158000
2012-09-15 14:20:05,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49840, bytes: 20658, op: HDFS_READ, cliID: DFSClient_1762809953, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 302000
................
...............
2012-09-15 14:21:08,667 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49944, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 753000
2012-09-15 14:21:11,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49947, bytes: 3096, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 372000
2012-09-15 14:21:11,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49948, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 353000
2012-09-15 14:21:11,673 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49949, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 355000
2012-09-15 14:21:14,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49951, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 368000
2012-09-15 14:21:17,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5156635908233241378_1080 src: /127.0.0.1:49953 dest: /127.0.0.1:50010
2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49953, dest: /127.0.0.1:50010, bytes: 28184, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_5156635908233241378_1080, duration: 867000
2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_5156635908233241378_1080 terminating
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-8793872286240925170_1064 file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6641115981657984283_1079 file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-8793872286240925170_1064 at file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6285781475981276067_1080 file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6282803597350612472_1068 file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6641115981657984283_1079 at file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-1973500835155733464_1071 file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6285781475981276067_1080 at file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-201819473056990539_1072 file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6282803597350612472_1068 at file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_966543399440919118_1073 file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-1973500835155733464_1071 at file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_1230157759905402594_1069 file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-201819473056990539_1072 at file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_3059764143082927316_1070 file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_966543399440919118_1073 at file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_4471127410063335353_1066 file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_1230157759905402594_1069 at file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_5156635908233241378_1080 file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_3059764143082927316_1070 at file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7335749996441800570_1065 file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 for deletion
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_4471127410063335353_1066 at file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7674314220695151815_1067 file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 for deletion
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_5156635908233241378_1080 at file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7335749996441800570_1065 at file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7674314220695151815_1067 at file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815
2012-09-15 14:29:48,016 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_5700986404331589806_1038

Not sure where I should go next. Hope to get some help.
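
One thing I may try next (just a guess at a next step, using the paths
from my own setup) is running fsck over the job's input directory, to
see whether the blocks themselves verify:

$ bin/hadoop fsck /user/jasonhuang -files -blocks -locations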

I didn't tweak any checksum size related configs in my config files -
actually, I wasn't even aware that those could be set in the config files.
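
For reference, my understanding is that the knob in question is
io.bytes.per.checksum (default 512 in Hadoop 1.x); purely as an
illustration, overriding it in core-site.xml would look like the snippet
below, which I have NOT done anywhere:

<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
</property>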

thanks!

Jason


On Fri, Sep 14, 2012 at 10:05 PM, Harsh J <ha...@cloudera.com> wrote:
> Hi Jason,
>
> Does the DN log have something in it that corresponds to these errors?
> Is there also some stacktrace/further text after the line you've
> pasted until? Can we have it?
>
> Also, did you tweak any checksum size related configs in your config files?
>
> On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
>> Hello,
>>
>> Looking for some help in setting up hadoop 1.0.3 in Pseudo distributed mode...
>>
>> I was able to install hadoop, config the .xml files and start all nodes:
>> $ JPS
>> 6645 Jps
>> 6030 SecondaryNameNode
>> 6185 TaskTracker
>> 5851 NameNode
>> 6095 JobTracker
>> 5939 DataNode
>>
>> However, when I tried to play around with a couple of Map-reduce jobs
>> with provided example jar files I got the following errors:
>>
>> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>> Number of Maps  = 10
>> Samples per Map = 100
>> Wrote input for Map #0
>> Wrote input for Map #1
>> Wrote input for Map #2
>> Wrote input for Map #3
>> Wrote input for Map #4
>> Wrote input for Map #5
>> Wrote input for Map #6
>> Wrote input for Map #7
>> Wrote input for Map #8
>> Wrote input for Map #9
>> Starting Job
>> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
>> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
>> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
>> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
>> attempt_201209141701_0003_m_000011_0, Status : FAILED
>> Error initializing attempt_201209141701_0003_m_000011_0:
>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>>
>> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
>> /user/jasonhuang/input /user/jasonhuang/output
>> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
>> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
>> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
>> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:jasonhuang
>> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>>
>> Does anyone have idea on why the error occurs and how I can fix them?
>>
>> thanks!
>>
>> Jason
>
>
>
> --
> Harsh J

Re: HDFS Error - BlockReader: error in packet header

Posted by Jason Huang <ja...@icare.com>.
Thanks Harsh.

I've tried the following again:
$ ./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100

And I got the same error (sorry for having to paste this longggg log):
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
12/09/15 14:20:02 INFO mapred.FileInputFormat: Total input paths to process : 10
12/09/15 14:20:02 INFO mapred.JobClient: Running job: job_201209151409_0001
12/09/15 14:20:03 INFO mapred.JobClient:  map 0% reduce 0%
12/09/15 14:20:14 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_0, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stdout
12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stderr
12/09/15 14:20:23 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_1, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_1:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stdout
12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stderr
12/09/15 14:20:32 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_2, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_2:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stdout
12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stderr
12/09/15 14:20:50 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_0, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stdout
12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stderr
12/09/15 14:20:59 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_1, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_1:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stdout
12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stderr
12/09/15 14:21:08 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_2, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_2:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stdout
12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stderr
12/09/15 14:21:17 INFO mapred.JobClient: Job complete: job_201209151409_0001
12/09/15 14:21:17 INFO mapred.JobClient: Counters: 4
12/09/15 14:21:17 INFO mapred.JobClient:   Job Counters
12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=0
12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
maps waiting after reserving slots (ms)=0
12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/09/15 14:21:17 INFO mapred.JobClient: Job Failed: JobCleanup Task
Failure, Task: task_201209151409_0001_m_000010
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
	at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
	at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)


Here's the a 'brief' version of the data node log (but still very long...):
2012-09-15 14:20:02,640 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_39584416619615086_1077 src: /127.0.0.1:49829 dest:
/127.0.0.1:50010 2012-09-15 14:20:02,642 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:49829, dest: /127.0.0.1:50010, bytes: 20494, op:
HDFS_WRITE, cliID: DFSClient_1015299679, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_39584416619615086_1077, duration: 565000 2012-09-15 14:20:02,642
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
0 for block blk_39584416619615086_1077 terminating 2012-09-15
14:20:02,663 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49831, bytes: 20658, op: HDFS_READ,
cliID: DFSClient_672168163, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_39584416619615086_1077, duration: 270000 2012-09-15 14:20:02,665
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49832, bytes: 18, op: HDFS_READ,
cliID: DFSClient_672168163, offset: 20480, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_39584416619615086_1077, duration: 146000 2012-09-15 14:20:02,757
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_-6641115981657984283_1079 src: /127.0.0.1:49833 dest:
/127.0.0.1:50010 2012-09-15 14:20:02,761 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:49833, dest: /127.0.0.1:50010, bytes: 20563, op:
HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_-6641115981657984283_1079, duration: 2189000 2012-09-15
14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
PacketResponder 0 for block blk_-6641115981657984283_1079 terminating
2012-09-15 14:20:02,776 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_-6285781475981276067_1080 src: /127.0.0.1:49835 dest:
/127.0.0.1:50010 2012-09-15 14:20:02,777 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:49835, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE,
cliID: DFSClient_1436299339, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_-6285781475981276067_1080, duration: 321000 2012-09-15
14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
PacketResponder 0 for block blk_-6285781475981276067_1080 terminating
2012-09-15 14:20:02,781 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49836, bytes: 201, op: HDFS_READ,
cliID: DFSClient_672168163, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_-6638660277808139598_1076, duration: 152000 2012-09-15
14:20:05,555 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49838, bytes: 110, op: HDFS_READ,
cliID: DFSClient_1214970016, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_-6285781475981276067_1080, duration: 158000 2012-09-15
14:20:05,563 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49840, bytes: 20658, op: HDFS_READ,
cliID: DFSClient_1762809953, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_39584416619615086_1077, duration: 302000  ................
...............
2012-09-15 14:21:08,667 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49944, bytes: 3216, op: HDFS_READ,
cliID: DFSClient_36948932, offset: 139264, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_7326309033072036040_1074, duration: 753000 2012-09-15 14:21:11,671
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49947, bytes: 3096, op: HDFS_READ,
cliID: DFSClient_36948932, offset: 139264, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_7326309033072036040_1074, duration: 372000 2012-09-15 14:21:11,672
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49948, bytes: 3216, op: HDFS_READ,
cliID: DFSClient_36948932, offset: 139264, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_7326309033072036040_1074, duration: 353000 2012-09-15 14:21:11,673
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49949, bytes: 3216, op: HDFS_READ,
cliID: DFSClient_36948932, offset: 139264, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_7326309033072036040_1074, duration: 355000 2012-09-15 14:21:14,677
INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:50010, dest: /127.0.0.1:49951, bytes: 3216, op: HDFS_READ,
cliID: DFSClient_36948932, offset: 139264, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_7326309033072036040_1074, duration: 368000 2012-09-15 14:21:17,628
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block
blk_5156635908233241378_1080 src: /127.0.0.1:49953 dest:
/127.0.0.1:50010 2012-09-15 14:21:17,630 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
/127.0.0.1:49953, dest: /127.0.0.1:50010, bytes: 28184, op:
HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID:
DS-1101353210-192.168.10.23-50010-1347651592008, blockid:
blk_5156635908233241378_1080, duration: 867000 2012-09-15 14:21:17,630
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
0 for block blk_5156635908233241378_1080 terminating 2012-09-15
14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Scheduling block blk_-8793872286240925170_1064 file
/Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170 for
deletion 2012-09-15 14:21:21,128 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_-6641115981657984283_1079 file
/Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283 for
deletion 2012-09-15 14:21:21,128 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_-8793872286240925170_1064 at file
/Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170
2012-09-15 14:21:21,128 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_-6285781475981276067_1080 file
/Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067 for
deletion 2012-09-15 14:21:21,128 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_-6282803597350612472_1068 file
/Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472 for
deletion 2012-09-15 14:21:21,128 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_-6641115981657984283_1079 at file
/Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283
2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_-1973500835155733464_1071 file
/Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_-6285781475981276067_1080 at file
/Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067
2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_-201819473056990539_1072 file
/Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_-6282803597350612472_1068 at file
/Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472
2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_966543399440919118_1073 file
/Users/jasonhuang/hdfs/data/current/blk_966543399440919118 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_-1973500835155733464_1071 at file
/Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464
2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
blk_1230157759905402594_1069 file
/Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_-201819473056990539_1072 at file
/Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 2012-09-15
14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Scheduling block blk_3059764143082927316_1070 file
/Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_966543399440919118_1073 at file
/Users/jasonhuang/hdfs/data/current/blk_966543399440919118 2012-09-15
14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Scheduling block blk_4471127410063335353_1066 file
/Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_1230157759905402594_1069 at file
/Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 2012-09-15
14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Scheduling block blk_5156635908233241378_1080 file
/Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 for
deletion 2012-09-15 14:21:21,129 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_3059764143082927316_1070 at file
/Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 2012-09-15
14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Scheduling block blk_7335749996441800570_1065 file
/Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 for
deletion 2012-09-15 14:21:21,130 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_4471127410063335353_1066 at file
/Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 2012-09-15
14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Scheduling block blk_7674314220695151815_1067 file
/Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 for
deletion 2012-09-15 14:21:21,130 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
blk_5156635908233241378_1080 at file
/Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 2012-09-15
14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Deleted block blk_7335749996441800570_1065 at file
/Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 2012-09-15
14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
Deleted block blk_7674314220695151815_1067 at file
/Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 2012-09-15
14:29:48,016 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
succeeded for blk_5700986404331589806_1038

Not sure where I should go next. Hope to get some help.

I didn't  weak any checksum size related configs in my config files -
actually I am not even aware of the ability to do in config files.

thanks!

Jason


On Fri, Sep 14, 2012 at 10:05 PM, Harsh J <ha...@cloudera.com> wrote:
> Hi Jason,
>
> Does the DN log have something in it that corresponds to these errors?
> Is there also some stacktrace/further text after the line you've
> pasted until? Can we have it?
>
> Also, did you tweak any checksum size related configs in your config files?
>
> On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
>> Hello,
>>
>> Looking for some help in setting up hadoop 1.0.3 in Pseudo distributed mode...
>>
>> I was able to install hadoop, config the .xml files and start all nodes:
>> $ JPS
>> 6645 Jps
>> 6030 SecondaryNameNode
>> 6185 TaskTracker
>> 5851 NameNode
>> 6095 JobTracker
>> 5939 DataNode
>>
>> However, when I tried to play around with a couple of Map-reduce jobs
>> with provided example jar files I got the following errors:
>>
>> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>> Number of Maps  = 10
>> Samples per Map = 100
>> Wrote input for Map #0
>> Wrote input for Map #1
>> Wrote input for Map #2
>> Wrote input for Map #3
>> Wrote input for Map #4
>> Wrote input for Map #5
>> Wrote input for Map #6
>> Wrote input for Map #7
>> Wrote input for Map #8
>> Wrote input for Map #9
>> Starting Job
>> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
>> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
>> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
>> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
>> attempt_201209141701_0003_m_000011_0, Status : FAILED
>> Error initializing attempt_201209141701_0003_m_000011_0:
>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>>
>> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
>> /user/jasonhuang/input /user/jasonhuang/output
>> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
>> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
>> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
>> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:jasonhuang
>> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>>
>> Does anyone have idea on why the error occurs and how I can fix them?
>>
>> thanks!
>>
>> Jason
>
>
>
> --
> Harsh J

Re: HDFS Error - BlockReader: error in packet header

Posted by Jason Huang <ja...@icare.com>.
Thanks Harsh.

I've tried the following again:
$ ./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100

And I got the same error (sorry for having to paste this longggg log):
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
12/09/15 14:20:02 INFO mapred.FileInputFormat: Total input paths to process : 10
12/09/15 14:20:02 INFO mapred.JobClient: Running job: job_201209151409_0001
12/09/15 14:20:03 INFO mapred.JobClient:  map 0% reduce 0%
12/09/15 14:20:14 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_0, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stdout
12/09/15 14:20:14 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_0&filter=stderr
12/09/15 14:20:23 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_1, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_1:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stdout
12/09/15 14:20:23 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_1&filter=stderr
12/09/15 14:20:32 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000011_2, Status : FAILED
Error initializing attempt_201209151409_0001_m_000011_2:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stdout
12/09/15 14:20:32 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000011_2&filter=stderr
12/09/15 14:20:50 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_0, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_0:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stdout
12/09/15 14:20:50 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_0&filter=stderr
12/09/15 14:20:59 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_1, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_1:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stdout
12/09/15 14:20:59 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_1&filter=stderr
12/09/15 14:21:08 INFO mapred.JobClient: Task Id :
attempt_201209151409_0001_m_000010_2, Status : FAILED
Error initializing attempt_201209151409_0001_m_000010_2:
java.io.IOException: BlockReader: error in packet header(chunkOffset :
142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
	at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
	at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1460)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:2175)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2227)
	at java.io.DataInputStream.read(DataInputStream.java:100)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:74)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
	at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:230)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1222)
	at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1203)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:273)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
	at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
	at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
	at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:416)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
	at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
	at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
	at java.lang.Thread.run(Thread.java:636)

12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stdout
12/09/15 14:21:08 WARN mapred.JobClient: Error reading task
outputhttp://192.168.1.124:50060/tasklog?plaintext=true&attemptid=attempt_201209151409_0001_m_000010_2&filter=stderr
12/09/15 14:21:17 INFO mapred.JobClient: Job complete: job_201209151409_0001
12/09/15 14:21:17 INFO mapred.JobClient: Counters: 4
12/09/15 14:21:17 INFO mapred.JobClient:   Job Counters
12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=0
12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/09/15 14:21:17 INFO mapred.JobClient:     Total time spent by all
maps waiting after reserving slots (ms)=0
12/09/15 14:21:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/09/15 14:21:17 INFO mapred.JobClient: Job Failed: JobCleanup Task
Failure, Task: task_201209151409_0001_m_000010
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
	at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
	at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)


Here's a 'brief' version of the data node log (but still very long...):
2012-09-15 14:20:02,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_39584416619615086_1077 src: /127.0.0.1:49829 dest: /127.0.0.1:50010
2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49829, dest: /127.0.0.1:50010, bytes: 20494, op: HDFS_WRITE, cliID: DFSClient_1015299679, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 565000
2012-09-15 14:20:02,642 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_39584416619615086_1077 terminating
2012-09-15 14:20:02,663 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49831, bytes: 20658, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 270000
2012-09-15 14:20:02,665 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49832, bytes: 18, op: HDFS_READ, cliID: DFSClient_672168163, offset: 20480, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 146000
2012-09-15 14:20:02,757 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6641115981657984283_1079 src: /127.0.0.1:49833 dest: /127.0.0.1:50010
2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49833, dest: /127.0.0.1:50010, bytes: 20563, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6641115981657984283_1079, duration: 2189000
2012-09-15 14:20:02,761 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6641115981657984283_1079 terminating
2012-09-15 14:20:02,776 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6285781475981276067_1080 src: /127.0.0.1:49835 dest: /127.0.0.1:50010
2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49835, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_1436299339, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 321000
2012-09-15 14:20:02,777 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6285781475981276067_1080 terminating
2012-09-15 14:20:02,781 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49836, bytes: 201, op: HDFS_READ, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6638660277808139598_1076, duration: 152000
2012-09-15 14:20:05,555 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49838, bytes: 110, op: HDFS_READ, cliID: DFSClient_1214970016, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_-6285781475981276067_1080, duration: 158000
2012-09-15 14:20:05,563 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49840, bytes: 20658, op: HDFS_READ, cliID: DFSClient_1762809953, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_39584416619615086_1077, duration: 302000
................
...............
2012-09-15 14:21:08,667 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49944, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 753000
2012-09-15 14:21:11,671 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49947, bytes: 3096, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 372000
2012-09-15 14:21:11,672 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49948, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 353000
2012-09-15 14:21:11,673 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49949, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 355000
2012-09-15 14:21:14,677 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:49951, bytes: 3216, op: HDFS_READ, cliID: DFSClient_36948932, offset: 139264, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_7326309033072036040_1074, duration: 368000
2012-09-15 14:21:17,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_5156635908233241378_1080 src: /127.0.0.1:49953 dest: /127.0.0.1:50010
2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:49953, dest: /127.0.0.1:50010, bytes: 28184, op: HDFS_WRITE, cliID: DFSClient_672168163, offset: 0, srvID: DS-1101353210-192.168.10.23-50010-1347651592008, blockid: blk_5156635908233241378_1080, duration: 867000
2012-09-15 14:21:17,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_5156635908233241378_1080 terminating
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-8793872286240925170_1064 file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6641115981657984283_1079 file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-8793872286240925170_1064 at file /Users/jasonhuang/hdfs/data/current/blk_-8793872286240925170
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6285781475981276067_1080 file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-6282803597350612472_1068 file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472 for deletion
2012-09-15 14:21:21,128 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6641115981657984283_1079 at file /Users/jasonhuang/hdfs/data/current/blk_-6641115981657984283
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-1973500835155733464_1071 file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6285781475981276067_1080 at file /Users/jasonhuang/hdfs/data/current/blk_-6285781475981276067
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_-201819473056990539_1072 file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-6282803597350612472_1068 at file /Users/jasonhuang/hdfs/data/current/blk_-6282803597350612472
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_966543399440919118_1073 file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-1973500835155733464_1071 at file /Users/jasonhuang/hdfs/data/current/blk_-1973500835155733464
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_1230157759905402594_1069 file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_-201819473056990539_1072 at file /Users/jasonhuang/hdfs/data/current/blk_-201819473056990539
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_3059764143082927316_1070 file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_966543399440919118_1073 at file /Users/jasonhuang/hdfs/data/current/blk_966543399440919118
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_4471127410063335353_1066 file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_1230157759905402594_1069 at file /Users/jasonhuang/hdfs/data/current/blk_1230157759905402594
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_5156635908233241378_1080 file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378 for deletion
2012-09-15 14:21:21,129 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_3059764143082927316_1070 at file /Users/jasonhuang/hdfs/data/current/blk_3059764143082927316
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7335749996441800570_1065 file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570 for deletion
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_4471127410063335353_1066 at file /Users/jasonhuang/hdfs/data/current/blk_4471127410063335353
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block blk_7674314220695151815_1067 file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815 for deletion
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_5156635908233241378_1080 at file /Users/jasonhuang/hdfs/data/current/blk_5156635908233241378
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7335749996441800570_1065 at file /Users/jasonhuang/hdfs/data/current/blk_7335749996441800570
2012-09-15 14:21:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block blk_7674314220695151815_1067 at file /Users/jasonhuang/hdfs/data/current/blk_7674314220695151815
2012-09-15 14:29:48,016 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_5700986404331589806_1038

Not sure where I should go next. Hope to get some help.
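
One thing I might try, on the guess that a corrupt block (rather than a config problem) is behind the bad packet headers, is an fsck pass over HDFS to see whether the namenode reports anything unhealthy:

$ bin/hadoop fsck / -files -blocks -locations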

I didn't tweak any checksum size related configs in my config files -
actually, I wasn't even aware those could be set in the config files.
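
(For reference, if I understand the docs correctly, the knob in question would be io.bytes.per.checksum in core-site.xml, which defaults to 512 bytes; an override would look something like:

<property>
  <name>io.bytes.per.checksum</name>
  <value>512</value>
</property>

I don't have any such property set, so I should be running with the defaults.)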

thanks!

Jason


On Fri, Sep 14, 2012 at 10:05 PM, Harsh J <ha...@cloudera.com> wrote:
> Hi Jason,
>
> Does the DN log have something in it that corresponds to these errors?
> Is there also some stacktrace/further text after the line you've
> pasted up to? Can we have it?
>
> Also, did you tweak any checksum size related configs in your config files?
>
> On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
>> Hello,
>>
>> Looking for some help in setting up hadoop 1.0.3 in Pseudo distributed mode...
>>
>> I was able to install hadoop, config the .xml files and start all nodes:
>> $ JPS
>> 6645 Jps
>> 6030 SecondaryNameNode
>> 6185 TaskTracker
>> 5851 NameNode
>> 6095 JobTracker
>> 5939 DataNode
>>
>> However, when I tried to play around with a couple of Map-reduce jobs
>> with provided example jar files I got the following errors:
>>
>> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
>> Number of Maps  = 10
>> Samples per Map = 100
>> Wrote input for Map #0
>> Wrote input for Map #1
>> Wrote input for Map #2
>> Wrote input for Map #3
>> Wrote input for Map #4
>> Wrote input for Map #5
>> Wrote input for Map #6
>> Wrote input for Map #7
>> Wrote input for Map #8
>> Wrote input for Map #9
>> Starting Job
>> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
>> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
>> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
>> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
>> attempt_201209141701_0003_m_000011_0, Status : FAILED
>> Error initializing attempt_201209141701_0003_m_000011_0:
>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>>
>> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
>> /user/jasonhuang/input /user/jasonhuang/output
>> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
>> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes
>> where applicable
>> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
>> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
>> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:jasonhuang
>> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> java.io.IOException: BlockReader: error in packet header(chunkOffset :
>> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>>
>> Does anyone have idea on why the error occurs and how I can fix them?
>>
>> thanks!
>>
>> Jason
>
>
>
> --
> Harsh J

Re: HDFS Error - BlockReader: error in packet header

Posted by Harsh J <ha...@cloudera.com>.
Hi Jason,

Does the DN log have something in it that corresponds to these errors?
Is there also some stacktrace/further text after the line you've
pasted up to? Can we have it?

Also, did you tweak any checksum size related configs in your config files?

On Sat, Sep 15, 2012 at 3:20 AM, Jason Huang <ja...@icare.com> wrote:
> Hello,
>
> Looking for some help in setting up hadoop 1.0.3 in Pseudo distributed mode...
>
> I was able to install hadoop, config the .xml files and start all nodes:
> $ JPS
> 6645 Jps
> 6030 SecondaryNameNode
> 6185 TaskTracker
> 5851 NameNode
> 6095 JobTracker
> 5939 DataNode
>
> However, when I tried to play around with a couple of Map-reduce jobs
> with provided example jar files I got the following errors:
>
> (1) $ bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100
> Number of Maps  = 10
> Samples per Map = 100
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Wrote input for Map #5
> Wrote input for Map #6
> Wrote input for Map #7
> Wrote input for Map #8
> Wrote input for Map #9
> Starting Job
> 12/09/14 17:39:06 INFO mapred.FileInputFormat: Total input paths to process : 10
> 12/09/14 17:39:06 INFO mapred.JobClient: Running job: job_201209141701_0003
> 12/09/14 17:39:07 INFO mapred.JobClient:  map 0% reduce 0%
> 12/09/14 17:39:16 INFO mapred.JobClient: Task Id :
> attempt_201209141701_0003_m_000011_0, Status : FAILED
> Error initializing attempt_201209141701_0003_m_000011_0:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))
>         at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1580)
>
> (2) $ ./bin/hadoop jar hadoop-examples-1.0.3.jar wordcount
> /user/jasonhuang/input /user/jasonhuang/output
> 12/09/14 17:37:51 INFO input.FileInputFormat: Total input paths to process : 1
> 12/09/14 17:37:51 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java classes
> where applicable
> 12/09/14 17:37:51 WARN snappy.LoadSnappy: Snappy native library not loaded
> 12/09/14 17:37:57 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://localhost:9000/tmp/hadoop-jasonhuang/mapred/staging/jasonhuang/.staging/job_201209141701_0002
> 12/09/14 17:37:57 ERROR security.UserGroupInformation:
> PriviledgedActionException as:jasonhuang
> cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> java.io.IOException: BlockReader: error in packet header(chunkOffset :
> 19968, dataLen : 1835351087, seqno : 7023413562532324724 (last: 0))
>
> Does anyone have idea on why the error occurs and how I can fix them?
>
> thanks!
>
> Jason



-- 
Harsh J
