Posted to common-dev@hadoop.apache.org by "Espen Amble Kolstad (JIRA)" <ji...@apache.org> on 2007/03/05 16:48:51 UTC
[jira] Created: (HADOOP-1062) Checksum error in InMemoryFileSystem
Checksum error in InMemoryFileSystem
------------------------------------
Key: HADOOP-1062
URL: https://issues.apache.org/jira/browse/HADOOP-1062
Project: Hadoop
Issue Type: Bug
Components: mapred
Affects Versions: 0.12.0, 0.12.1
Reporter: Espen Amble Kolstad
I'm getting the following error in the tasktracker log on two attempts:
2007-03-05 14:59:50,320 WARN mapred.TaskRunner - task_0001_r_000005_0 Intermediate Merge of the inmemory files threw an exception: org.apache.hadoop.fs.ChecksumException: Checksum error: /trank/nutch-0.9-dev/filesystem/mapred/local/task_0001_r_000005_0/map_2.out at 16776192
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.verifySum(ChecksumFileSystem.java:250)
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.readBuffer(ChecksumFileSystem.java:207)
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.read(ChecksumFileSystem.java:163)
at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:57)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:91)
at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1300)
at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1363)
at org.apache.hadoop.io.SequenceFile$Reader.nextRawValue(SequenceFile.java:1656)
at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawValue(SequenceFile.java:2579)
at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.next(SequenceFile.java:2351)
at org.apache.hadoop.io.SequenceFile$Sorter.writeFile(SequenceFile.java:2226)
at org.apache.hadoop.mapred.ReduceTaskRunner$InMemFSMergeThread.run(ReduceTaskRunner.java:820)
When I changed fs.inmemory.size.mb to 0 (the default was 75), the reduce completed successfully.
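For reference, the workaround corresponds to overriding this property in hadoop-site.xml (a sketch only; the property name comes from the report, and setting it to 0 appears to bypass the in-memory filesystem during the reduce-side merge):

```xml
<!-- hadoop-site.xml: shrink the in-memory filesystem used for the
     reduce-side merge to 0 MB, effectively disabling it (workaround). -->
<property>
  <name>fs.inmemory.size.mb</name>
  <value>0</value>
</property>
```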
Could it be related to HADOOP-1027 or HADOOP-1014?
- Espen
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12478877 ]
Hairong Kuang commented on HADOOP-1062:
---------------------------------------
Hi Espen, could you send me a test case that consistently reproduces this error? Thanks.
[jira] Commented: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12478089 ]
Devaraj Das commented on HADOOP-1062:
-------------------------------------
I think this has something to do with the changes that ChecksumFileSystem (HADOOP-928) introduced in the InMemoryFileSystem, or something related to that. We need to take a closer look at the interaction between the InMemoryFileSystem and the ChecksumFileSystem.
[jira] Commented: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Doug Cutting (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12478973 ]
Doug Cutting commented on HADOOP-1062:
--------------------------------------
Could this in fact be caused by a machine without ECC memory? The Internet Archive had lots of problems in sort when it had a bad batch of memory.
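As an aside, the failure mode Doug describes is exactly what a per-chunk checksum is meant to catch. A minimal, self-contained sketch (not Hadoop's actual code; java.util.zip.CRC32 stands in for the checksum ChecksumFileSystem records per chunk) of how a single bit flipped in memory surfaces as a checksum error on the next read:

```java
import java.util.zip.CRC32;

public class ChecksumDemo {
    // Compute a CRC32 over a chunk, as a checksumming filesystem would
    // when the chunk is first written out.
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] chunk = "intermediate map output".getBytes();
        long stored = checksum(chunk); // recorded alongside the data

        // A single bit flipped by faulty (non-ECC) RAM after the checksum
        // was computed is enough to fail verification on re-read:
        chunk[0] ^= 0x01;

        boolean ok = checksum(chunk) == stored;
        System.out.println(ok ? "checksum ok" : "checksum error");
    }
}
```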
[jira] Updated: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Nigel Daley (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nigel Daley updated HADOOP-1062:
--------------------------------
Fix Version/s: 0.12.1
Priority: Blocker (was: Major)
Affects Version/s: (was: 0.12.0)
This should be investigated prior to 0.12.1 release.
[jira] Commented: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Espen Amble Kolstad (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12478080 ]
Espen Amble Kolstad commented on HADOOP-1062:
---------------------------------------------
On the second attempt I got this additional error:
2007-03-05 16:05:57,883 WARN mapred.TaskRunner - task_0001_r_000005_1 Final merge of the inmemory files threw an exception: org.apache.hadoop.fs.ChecksumException: Checksum error: /trank/nutch-0.9-dev/filesystem/mapred/local/task_0001_r_000005_1/map_2.out at 16776192
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.verifySum(ChecksumFileSystem.java:250)
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.readBuffer(ChecksumFileSystem.java:207)
at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.read(ChecksumFileSystem.java:163)
at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.DataInputStream.readFully(DataInputStream.java:178)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:57)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:91)
at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1300)
at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1363)
at org.apache.hadoop.io.SequenceFile$Reader.nextRawValue(SequenceFile.java:1656)
at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawValue(SequenceFile.java:2579)
at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.next(SequenceFile.java:2351)
at org.apache.hadoop.io.SequenceFile$Sorter.writeFile(SequenceFile.java:2226)
at org.apache.hadoop.mapred.ReduceTaskRunner$InMemFSMergeThread.run(ReduceTaskRunner.java:820)
[jira] Resolved: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Espen Amble Kolstad (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Espen Amble Kolstad resolved HADOOP-1062.
-----------------------------------------
Resolution: Cannot Reproduce
Fix Version/s: (was: 0.13.0)
I haven't been able to reproduce this error even on the same hardware.
[jira] Updated: (HADOOP-1062) Checksum error in InMemoryFileSystem
Posted by "Hairong Kuang (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-1062:
----------------------------------
Component/s: fs (was: mapred)
Fix Version/s: 0.13.0 (was: 0.12.1)
Assignee: Hairong Kuang
Priority: Major (was: Blocker)
Affects Version/s: 0.12.0 (was: 0.12.1)
I looked at this issue, but I am not able to reproduce the error. I would suggest that we fix it in 0.13.0, once we get more input from the reporter.