Posted to hdfs-dev@hadoop.apache.org by "Sean Mackrory (JIRA)" <ji...@apache.org> on 2017/07/17 14:22:00 UTC

[jira] [Created] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

Sean Mackrory created HDFS-12151:
------------------------------------

             Summary: Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
                 Key: HDFS-12151
                 URL: https://issues.apache.org/jira/browse/HDFS-12151
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Sean Mackrory
            Assignee: Sean Mackrory


Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently fails. On the client side it looks like this:
{code}
    17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
    java.io.EOFException: Premature EOF: no length prefix available
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
            at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
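
The "no length prefix available" message means the client was waiting for a varint-length-prefixed protobuf response to its writeBlock request, but the stream ended before any prefix arrived because the DataNode had already dropped the connection. The following is only a minimal sketch of that read pattern (a hypothetical helper, not the actual PBHelper.vintPrefixed code), showing how a peer closing the socket before responding surfaces as this exact EOFException:
{code}
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class VintPrefixedRead {
    // Read the first byte of the varint length prefix. If the stream is already
    // at end-of-file (the peer closed the connection without responding), fail
    // the same way the client log above does.
    static int readLengthPrefixStart(InputStream in) throws IOException {
        int firstByte = in.read();
        if (firstByte == -1) {
            throw new EOFException("Premature EOF: no length prefix available");
        }
        return firstByte;
    }

    public static void main(String[] args) throws IOException {
        // Simulates a connection that was closed before any response bytes were written.
        InputStream closedByPeer = new ByteArrayInputStream(new byte[0]);
        readLengthPrefixStart(closedByPeer); // throws EOFException, as in the log
    }
}{code}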

But on the DataNode side there is an ArrayIndexOutOfBoundsException because the Hadoop 2 client did not send any targetStorageTypes:
{code}
    java.lang.ArrayIndexOutOfBoundsException: 0
            at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
            at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
            at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
            at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
            at java.lang.Thread.run(Thread.java:745){code}
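
For illustration only (this is not the actual DataXceiver code), the trace is consistent with indexing the first element of the targetStorageTypes array received over the wire without checking whether an older client sent the field at all. A minimal sketch of the failing pattern and a defensive fallback, with DISK assumed as the default purely for the example:
{code}
public class TargetStorageTypesExample {
    enum StorageType { DISK, SSD, ARCHIVE }

    // Mirrors the failing pattern: take the first element of the
    // targetStorageTypes array without checking that the client sent any.
    static StorageType firstTypeUnsafe(StorageType[] targetStorageTypes) {
        return targetStorageTypes[0]; // ArrayIndexOutOfBoundsException: 0 when empty
    }

    // Defensive variant: fall back to a default when the field was omitted.
    static StorageType firstTypeSafe(StorageType[] targetStorageTypes) {
        return targetStorageTypes.length > 0 ? targetStorageTypes[0] : StorageType.DISK;
    }

    public static void main(String[] args) {
        StorageType[] fromHadoop2Client = new StorageType[0]; // field absent on the wire
        System.out.println(firstTypeSafe(fromHadoop2Client));   // DISK
        System.out.println(firstTypeUnsafe(fromHadoop2Client)); // throws, as in the trace
    }
}{code}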



