Posted to common-user@hadoop.apache.org by "Palleti, Pallavi" <pa...@corp.aol.com> on 2008/11/27 19:08:30 UTC

Error with Sequence File in hadoop-18

Hi,

 I am getting "Checksum ok was sent" errors when using Hadoop. Can
someone please let me know why this error occurs and how to avoid it?

It ran perfectly fine on hadoop-0.17, and the error started appearing
after we upgraded to hadoop-0.18.2.

 

The full stack trace is:

08/11/27 13:02:58 INFO fs.FSInputChecker: java.io.IOException: Checksum ok was sent and should not be sent again
        at org.apache.hadoop.dfs.DFSClient$BlockReader.read(DFSClient.java:863)
        at org.apache.hadoop.dfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:1392)
        at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1428)
        at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:1377)
        at java.io.DataInputStream.readByte(DataInputStream.java:248)
        at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:324)
        at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:345)
        at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1648)
        at org.apache.hadoop.io.SequenceFile$Reader.readBlock(SequenceFile.java:1688)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1850)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1879)
        at org.apache.hadoop.io.MapFile$Reader.readIndex(MapFile.java:318)
        at org.apache.hadoop.io.MapFile$Reader.seekInternal(MapFile.java:434)
        at org.apache.hadoop.io.MapFile$Reader.seekInternal(MapFile.java:416)
        at org.apache.hadoop.io.MapFile$Reader.seek(MapFile.java:403)
        at org.apache.hadoop.io.MapFile$Reader.get(MapFile.java:522)
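
The bottom frames of the trace come from a MapFile lookup. For readers unfamiliar with that API, a minimal sketch of the call path on the 0.18 API might look like this; the path, key, and class name are hypothetical, since the thread does not show the original code:

```java
// Hedged sketch of the call path at the bottom of the trace: a MapFile
// lookup on the Hadoop 0.18 API. The path, key, and class name are
// hypothetical -- the original code is not shown in the thread.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;

public class MapFileLookup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // A MapFile is a directory holding an "index" and a "data" SequenceFile.
        MapFile.Reader reader = new MapFile.Reader(fs, "/user/pallavi/lookup.map", conf);
        Text key = new Text("some-key");
        Text value = new Text();

        // get() reads the index and seeks into the data file; on 0.18.2 each
        // such read can log the harmless "Checksum ok was sent" INFO message.
        if (reader.get(key, value) != null) {
            System.out.println(key + " => " + value);
        }
        reader.close();
    }
}
```

The INFO message comes from the DFS read path underneath the seek, not from MapFile itself, so any SequenceFile or MapFile read over HDFS can trigger it.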

 

Thanks

Pallavi


Re: Error with Sequence File in hadoop-18

Posted by Amareshwari Sriramadasu <am...@yahoo-inc.com>.
The fix simply removes the log message. If you are OK with seeing the
log message, you can continue using 0.18.2. If not, you can apply the
patch available on the JIRA and rebuild, since 0.18.3 is not yet released.
-Amareshwari
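
Since 0.18.3 was not yet released at the time, the patch route described above would look roughly as follows; the checkout path and patch file name are illustrative, and the actual attachment is on the HADOOP-4499 JIRA:

```shell
# Hypothetical workflow: apply the HADOOP-4499 patch to a 0.18.2 source
# checkout and rebuild. Paths and the patch file name are illustrative.
cd hadoop-0.18.2                  # your Hadoop 0.18.2 source tree
patch -p0 < HADOOP-4499.patch     # patch downloaded from the JIRA issue
ant jar                           # rebuild the Hadoop core jar
```

Hadoop patches of that era were generated relative to the source root, hence `-p0`; the rebuilt jar then replaces the one shipped in the 0.18.2 release on each node.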

Palleti, Pallavi wrote:
> Hi Amareshwari,
>  
>  Thanks for the reply. We recently upgraded hadoop cluster to
> hadoop-18.2. Can you please suggest a simple way of avoiding this issue.
> I mean, do we need to do the full upgrade to hadoop-0.18.3 or is there a
> simple way of taking the patch and adding it to the existing code
> repository and rebuild? 
>
> Thanks
> Pallavi
>
> -----Original Message-----
> From: Amareshwari Sriramadasu [mailto:amarsri@yahoo-inc.com] 
> Sent: Friday, November 28, 2008 10:56 AM
> To: core-user@hadoop.apache.org
> Subject: Re: Error with Sequence File in hadoop-18
>
> It got fixed in 0.18.3 (HADOOP-4499).
>
> -Amareshwari
> Palleti, Pallavi wrote:
>> [original report and stack trace snipped]


RE: Error with Sequence File in hadoop-18

Posted by "Palleti, Pallavi" <pa...@corp.aol.com>.
Hi Amareshwari,
 
 Thanks for the reply. We recently upgraded our Hadoop cluster to
hadoop-0.18.2. Can you please suggest a simple way of avoiding this
issue? That is, do we need to do a full upgrade to hadoop-0.18.3, or is
there a simple way of applying the patch to the existing code tree and
rebuilding?

Thanks
Pallavi

-----Original Message-----
From: Amareshwari Sriramadasu [mailto:amarsri@yahoo-inc.com] 
Sent: Friday, November 28, 2008 10:56 AM
To: core-user@hadoop.apache.org
Subject: Re: Error with Sequence File in hadoop-18

It got fixed in 0.18.3 (HADOOP-4499).

-Amareshwari
Palleti, Pallavi wrote:
> [original report and stack trace snipped]


Re: Error with Sequence File in hadoop-18

Posted by Amareshwari Sriramadasu <am...@yahoo-inc.com>.
It got fixed in 0.18.3 (HADOOP-4499).

-Amareshwari
Palleti, Pallavi wrote:
> [original report and stack trace snipped]