Posted to user@spark.apache.org by "Ganelin, Ilya" <Il...@capitalone.com> on 2014/12/02 22:20:07 UTC

SaveAsTextFile brings down data nodes with IO Exceptions

Hi all, as the last stage of execution, I am writing out a dataset to disk. Before I do this, I force the DAG to resolve so that this is the only job left in the pipeline. The dataset in question is not especially large (a few gigabytes). During this step, however, HDFS inevitably crashes: I lose connection to data nodes and get stuck in a loop of death where each failure causes a job restart, eventually causing the overall job to fail. On the data node logs I see the errors below. Does anyone have any ideas as to what is going on here? Thanks!
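For reference, a minimal sketch of that write pattern (the paths and the transformation are placeholders, not my actual job; cache()/count() is just one way to force the lineage to materialize before the write):

  // Force the DAG to resolve, then write out the result.
  val data = sc.textFile("hdfs:///input/records")  // placeholder input path
    .map(_.toUpperCase)                            // stands in for the real transformations
    .cache()                                       // keep the resolved dataset in memory
  data.count()                                     // action that forces the DAG to resolve
  data.saveAsTextFile("hdfs:///output/records")    // the write is now the only job left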


java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:455)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:741)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:718)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
        at java.lang.Thread.run(Thread.java:745)




innovationdatanode03.cof.ds.capitalone.com:1004:DataXceiver error processing WRITE_BLOCK operation  src: /10.37.248.60:44676 dst: /10.37.248.59:1004
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.37.248.59:43692 remote=/10.37.248.63:1004]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2101)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:660)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
        at java.lang.Thread.run(Thread.java:745)




DataNode{data=FSDataset{dirpath='[/opt/cloudera/hadoop/1/dfs/dn/current, /opt/cloudera/hadoop/10/dfs/dn/current, /opt/cloudera/hadoop/2/dfs/dn/current, /opt/cloudera/hadoop/3/dfs/dn/current, /opt/cloudera/hadoop/4/dfs/dn/current, /opt/cloudera/hadoop/5/dfs/dn/current, /opt/cloudera/hadoop/6/dfs/dn/current, /opt/cloudera/hadoop/7/dfs/dn/current, /opt/cloudera/hadoop/8/dfs/dn/current, /opt/cloudera/hadoop/9/dfs/dn/current]'}, localName='innovationdatanode03.cof.ds.capitalone.com:1004', datanodeUuid='e8a11fe2-300f-4e78-9211-f2ee41af6b8c', xmitsInProgress=0}:Exception transfering block BP-1458718292-10.37.248.67-1398976716371:blk_1076854538_3118445 to mirror 10.37.248.63:1004: java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.37.248.59:43692 remote=/10.37.248.63:1004]

Re: SaveAsTextFile brings down data nodes with IO Exceptions

Posted by Ilya Ganelin <il...@gmail.com>.
All - this issue showed up when I was tearing down a Spark context and
creating a new one. Often, I was then unable to write to HDFS due to this
error. I subsequently switched to a different implementation where,
instead of tearing down and re-initializing the Spark context, I'd
submit a separate request to YARN.
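For reference, a rough sketch of that second approach (the jar path and class name are placeholders; SparkLauncher is one way to submit a separate YARN application programmatically, and a plain spark-submit per request works just as well):

  import org.apache.spark.launcher.SparkLauncher

  // Submit the write as its own YARN application instead of tearing down
  // and re-initializing a SparkContext in-process.
  val proc = new SparkLauncher()
    .setAppResource("/path/to/write-job.jar")  // placeholder assembly jar
    .setMainClass("com.example.WriteJob")      // placeholder main class
    .setMaster("yarn-cluster")
    .launch()                                  // returns a java.lang.Process
  proc.waitFor()                               // block until the application exits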
On Fri, May 15, 2015 at 2:35 PM Puneet Kapoor <pu...@gmail.com>
wrote:

> I am seeing this on Hadoop 2.4.0.
>
> Thanks for your suggestions, I will try those and let you know if they
> help!

Re: SaveAsTextFile brings down data nodes with IO Exceptions

Posted by Puneet Kapoor <pu...@gmail.com>.
I am seeing this on Hadoop 2.4.0.

Thanks for your suggestions, I will try those and let you know if they
help!


Re: SaveAsTextFile brings down data nodes with IO Exceptions

Posted by Steve Loughran <st...@hortonworks.com>.
What version of Hadoop are you seeing this on?


On 15 May 2015, at 20:03, Puneet Kapoor <pu...@gmail.com> wrote:

Hey,

Did you find any solution for this issue? We are seeing similar logs in our data node logs. Appreciate any help.





2015-05-15 10:51:43,615 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: NttUpgradeDN1:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.112.190:46253 dst: /192.168.151.104:50010
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.151.104:50010 remote=/192.168.112.190:46253]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(Unknown Source)
        at java.io.BufferedInputStream.read1(Unknown Source)
        at java.io.BufferedInputStream.read(Unknown Source)
        at java.io.DataInputStream.read(Unknown Source)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:742)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
        at java.lang.Thread.run(Unknown Source)


That's being logged at ERROR level in the DN. It doesn't mean the DN has crashed, only that it timed out waiting for data: something has gone wrong elsewhere.


https://issues.apache.org/jira/browse/HDFS-693


There are a couple of properties you can set to extend the timeouts:

  <property>
    <name>dfs.socket.timeout</name>
    <value>20000</value>
  </property>

  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>20000</value>
  </property>


You can also increase the number of DataNode transceiver threads that handle data IO across the network:


  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

Yes, that property really does have that exact spelling; it's easy to get wrong.
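If you'd rather not change hdfs-site.xml cluster-wide, the two timeout properties can also be set per job from the Spark side (a sketch, assuming the same values as above; note this won't change the DataNode-side xceiver limit):

  // Raise the HDFS client socket timeouts for this job only.
  sc.hadoopConfiguration.set("dfs.socket.timeout", "20000")
  sc.hadoopConfiguration.set("dfs.datanode.socket.write.timeout", "20000")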


Re: SaveAsTextFile brings down data nodes with IO Exceptions

Posted by Puneet Kapoor <pu...@gmail.com>.
Hey,

Did you find any solution for this issue? We are seeing similar logs in our
data node logs. Appreciate any help.


2015-05-15 10:51:43,615 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
NttUpgradeDN1:50010:DataXceiver error processing WRITE_BLOCK operation
 src: /192.168.112.190:46253 dst: /192.168.151.104:50010
java.net.SocketTimeoutException: 60000 millis timeout while waiting for
channel to be ready for read. ch :
java.nio.channels.SocketChannel[connected local=/192.168.151.104:50010
remote=/192.168.112.190:46253]
        at
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.BufferedInputStream.fill(Unknown Source)
        at java.io.BufferedInputStream.read1(Unknown Source)
        at java.io.BufferedInputStream.read(Unknown Source)
        at java.io.DataInputStream.read(Unknown Source)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
        at
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
        at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
        at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:742)
        at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
        at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
        at java.lang.Thread.run(Unknown Source)

Thanks
Puneet
