Posted to hdfs-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/06/06 19:27:21 UTC

[jira] [Resolved] (HDFS-10484) Can not read file from java.io.IOException: Need XXX bytes, but only YYY bytes available

     [ https://issues.apache.org/jira/browse/HDFS-10484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HDFS-10484.
-----------------------------------
    Resolution: Cannot Reproduce

> Can not read file from java.io.IOException: Need XXX bytes, but only YYY bytes available
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-10484
>                 URL: https://issues.apache.org/jira/browse/HDFS-10484
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.0.0-alpha
>         Environment: Cloudera 4.1.2,  hadoop-hdfs-2.0.0+552-1.cdh4.1.2.p0.27
>            Reporter: pt
>
> We are running the CDH 4.1.2 distro and are trying to read a file from HDFS. The read fails with an exception on the datanode:
> 2016-06-02 10:43:26,354 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(X.X.X.X, storageID=DS-404876644-X.X.X.X-50010-1462535537579, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster18;nsid=2115086255;c=0):Got exception while serving BP-2091182050-X.X.X.X-1358362115729:blk_5037101550399368941_420502314 to /X.X.X.X:58614
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
>     at java.lang.Thread.run(Thread.java:662)
> 2016-06-02 10:43:26,354 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: app112.rutarget.ru:50010:DataXceiver error processing READ_BLOCK operation src: /X.X.X.X:58614 dest: /X.X.X.X:50010
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
>     at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
>     at java.lang.Thread.run(Thread.java:662)
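>
> For context, the exception comes from BlockSender.waitForMinLength (the first frame
> of the traces above): before serving a replica that is still being written (RBW),
> the datanode waits briefly for enough bytes to land on disk, and gives up with
> exactly this message if they never arrive. A minimal sketch of the 2.0-era check --
> a paraphrase of the datanode internals from memory; the poll count and sleep
> interval are assumptions, not verified against this exact CDH build:
>
>     // Sketch: wait up to ~3s for the on-disk length of an RBW replica to
>     // reach the requested length, otherwise fail the read.
>     private static void waitForMinLength(ReplicaBeingWritten rbw, long len)
>         throws IOException {
>       for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
>         try {
>           Thread.sleep(100); // give the writer a chance to flush more bytes
>         } catch (InterruptedException ie) {
>           throw new IOException(ie);
>         }
>       }
>       long bytesOnDisk = rbw.getBytesOnDisk();
>       if (bytesOnDisk < len) {
>         throw new IOException(String.format(
>             "Need %d bytes, but only %d bytes available", len, bytesOnDisk));
>       }
>     }
>
> Here the replica never grows past 10072576 bytes on disk while readers ask for
> 10172416, so every read attempt fails at this check.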
> FSCK shows the file as still open for write, but the HDFS client that handled writes to it closed the file long ago, so it has been stuck in RBW for the last few days. How can we recover the actual data block in this case? On the datanode I found only the binary .meta file, not the block file with the data.
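>
> One way out of this state, assuming the original writer really is gone, is to
> force lease recovery so the NameNode can finalize the last block. hdfs fsck with
> the -openforwrite option lists the affected files, and the stock Hadoop 2.x
> client API exposes recoverLease; a sketch, where the class name and the
> path argument are illustrative, not taken from the report:
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>     import org.apache.hadoop.hdfs.DistributedFileSystem;
>
>     public class RecoverStuckLease {
>       public static void main(String[] args) throws Exception {
>         // Path of the file fsck reports as OPENFORWRITE (example argument).
>         Path file = new Path(args[0]);
>         // Assumes the default filesystem is HDFS, so the cast below holds.
>         FileSystem fs = FileSystem.get(new Configuration());
>         // recoverLease asks the NameNode to take over the abandoned lease;
>         // it returns true once the file has been closed.
>         boolean closed = ((DistributedFileSystem) fs).recoverLease(file);
>         System.out.println("lease recovery " + (closed ? "complete" : "started"));
>       }
>     }
>
> Note that recovery can only finalize bytes that actually reached a datanode: if
> the block file itself is missing everywhere and only the .meta file remains, the
> unflushed data is not recoverable this way.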



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org