Posted to common-user@hadoop.apache.org by ch huang <ju...@gmail.com> on 2014/02/21 03:57:50 UTC

issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

hi, maillist:
          I see the following info in my HDFS log. The block belongs to a file that is written (appended) by Scribe, and I do not know why this happens.
Is there any limit in the HDFS system?

2014-02-21 10:33:30,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240 received exception java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
  getNumBytes()     = 35840
  getBytesOnDisk()  = 35840
  getVisibleLength()= -1
  getVolume()       = /data/4/dn/current
  getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
  unlinked=false
2014-02-21 10:33:30,235 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.11.12, storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got exception while serving BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
  getNumBytes()     = 35840
  getBytesOnDisk()  = 35840
  getVisibleLength()= -1
  getVolume()       = /data/4/dn/current
  getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
  unlinked=false
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:744)
2014-02-21 10:33:30,236 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /192.168.11.12:50010
java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
  getNumBytes()     = 35840
  getBytesOnDisk()  = 35840
  getVisibleLength()= -1
  getVolume()       = /data/4/dn/current
  getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
  unlinked=false
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:744)
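
For what it is worth, the exception above says this DataNode's local replica of the block is a ReplicaWaitingToBeRecovered (RWR) at generation stamp 3820986, while the reader asked for generation stamp 3823240, i.e. this copy is stale, which typically happens when the write pipeline was recovered (for example during an append) and this DataNode kept an old copy. A rough way to see which file owns the block and how its replicas look is fsck; the path /scribe/logs below is only a placeholder for wherever Scribe writes:

    hadoop fsck /scribe/logs -openforwrite -files -blocks -locations \
        | grep -A 2 'blk_-8536558734938003208'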

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
hi, I use CDH4.4

On Fri, Feb 21, 2014 at 12:04 PM, Ted Yu <yu...@gmail.com> wrote:

> Which hadoop release are you using ?
>
> Cheers
>
>
> On Thu, Feb 20, 2014 at 8:57 PM, ch huang <ju...@gmail.com> wrote:
>
>>  hi,maillist:
>>           i see the following info in my hdfs log ,and the block belong
>> to the file which write by scribe ,i do not know why
>> is there any limit in hdfs system ?

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by Ted Yu <yu...@gmail.com>.
Which Hadoop release are you using?

Cheers


On Thu, Feb 20, 2014 at 8:57 PM, ch huang <ju...@gmail.com> wrote:

> hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
I use the default value; it seems to be 4096.

I also checked the hdfs user's limits, and they look large enough:

-bash-4.1$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 514914
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
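
One thing worth noting: ulimit -a in an interactive shell does not always match the limits the running daemon actually received. A minimal sketch to check the live DataNode process on Linux, assuming a single DataNode JVM on the host:

    DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode | head -n 1)
    grep -i 'open files' /proc/$DN_PID/limits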


On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com> wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
I changed the config on all DataNodes, adding dfs.datanode.max.xcievers with a value of 131072,
and restarted all DNs. It still does not help.
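
One way to confirm the running DataNodes actually picked the new value up (a sketch, assuming the DataNode web UI is on the default port 50075 seen in the log and that its /conf servlet is enabled; CDH4 also knows this property under its newer name dfs.datanode.max.transfer.threads):

    curl -s http://ch12:50075/conf | grep -i -E 'xciever|transfer\.threads'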

On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com> wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
One more question: if I need to raise the DataNode xceiver value,
do I also need to add it to my NN config file?
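
For reference, dfs.datanode.max.xcievers (spelled dfs.datanode.max.transfer.threads in newer releases) is read by the DataNode, not the NameNode, so it belongs in hdfs-site.xml on each DataNode host, and only the DataNodes need a restart. A minimal check on a DataNode host, assuming the stock CDH config directory /etc/hadoop/conf:

    grep -E -A 1 'max\.xcievers|max\.transfer\.threads' /etc/hadoop/conf/hdfs-site.xml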



On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com> wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
i use default value it seems the value is 4096,

and also i checked hdfs user limit ,it's large enough

-bash-4.1$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 514914
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com>wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>
> 2014-02-21 10:33:30,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock
> BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240
> received exc
> eption java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.11.12,
> storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got
> exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver
> error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /
> 192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
>
>

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
i changed all datanode config add dfs.datanode.max.xcievers value is 131072
and restart all DN, still no use

On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com>wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>
> 2014-02-21 10:33:30,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock
> BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240
> received exc
> eption java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.11.12,
> storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got
> exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver
> error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /
> 192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
>
>

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
one more question is if i need add the value of data node xceiver
need i add it to my NN config file?



On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com>wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>
> 2014-02-21 10:33:30,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock
> BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240
> received exc
> eption java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.11.12,
> storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got
> exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver
> error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /
> 192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
>
>

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
I changed the config on all DataNodes, setting dfs.datanode.max.xcievers to 131072, and restarted every DN, but it still does not help.
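
For reference, a minimal sketch (assuming the stock HDFS CLI is on the path): dfs.datanode.max.xcievers is a DataNode-side setting that lives in hdfs-site.xml on each DataNode, the NameNode does not read it, and newer releases spell it dfs.datanode.max.transfer.threads. On a DataNode host you can confirm what the local configuration actually provides:

# prints the value the local hdfs-site.xml supplies for this key
$ hdfs getconf -confKey dfs.datanode.max.xcievers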

On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com> wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>
> 2014-02-21 10:33:30,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock
> BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240
> received exc
> eption java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.11.12,
> storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got
> exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver
> error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /
> 192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
>
>

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by ch huang <ju...@gmail.com>.
I use the default value, which seems to be 4096.

I also checked the hdfs user's limits, and they are large enough:

-bash-4.1$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 514914
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
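
One caveat, as a hedged sketch (assuming a Linux DataNode host): an interactive shell's ulimit output can differ from the limits the already-running DataNode daemon actually has, so it is worth reading the limits of the DN process itself:

$ DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode | head -1)
$ grep "Max open files" /proc/$DN_PID/limits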


On Fri, Feb 21, 2014 at 12:25 PM, Anurag Tangri <an...@yahoo.com> wrote:

>  Did you check your unix open file limit and data node xceiver value ?
>
> Is it too low for the number of blocks/data in your cluster ?
>
> Thanks,
> Anurag Tangri
>
> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
>
>   hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>
> 2014-02-21 10:33:30,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock
> BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240
> received exc
> eption java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.11.12,
> storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got
> exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver
> error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /
> 192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
>
>

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by Anurag Tangri <an...@yahoo.com>.
Did you check your Unix open file limit and the DataNode xceiver value?

Is it too low for the number of blocks/data in your cluster?

Thanks,
Anurag Tangri

> On Feb 20, 2014, at 6:57 PM, ch huang <ju...@gmail.com> wrote:
> 
> hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>  
> 2014-02-21 10:33:30,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240 received exc
> eption java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.11.12, storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)

Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation "

Posted by Ted Yu <yu...@gmail.com>.
Which Hadoop release are you using?
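
For reference, a quick way to check (assuming the hadoop CLI is installed in the usual place):

$ hadoop version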

Cheers


On Thu, Feb 20, 2014 at 8:57 PM, ch huang <ju...@gmail.com> wrote:

> hi,maillist:
>           i see the following info in my hdfs log ,and the block belong to
> the file which write by scribe ,i do not know why
> is there any limit in hdfs system ?
>
> 2014-02-21 10:33:30,235 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock
> BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240
> received exc
> eption java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecov
> ered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
> 2014-02-21 10:33:30,235 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(192.168.11.12,
> storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoP
> ort=50075, ipcPort=50020,
> storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got
> exception while serving BP-1043055049-192.168.11.11-1382442676
> 609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, b
> lk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
> 2014-02-21 10:33:30,236 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver
> error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /
> 192.168.11.12:50010
> java.io.IOException: Replica gen stamp < block genstamp,
> block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240,
> replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
>   getNumBytes()     = 35840
>   getBytesOnDisk()  = 35840
>   getVisibleLength()= -1
>   getVolume()       = /data/4/dn/current
>   getBlockFile()    =
> /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
>   unlinked=false
>         at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
>         at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>         at java.lang.Thread.run(Thread.java:744)
>
