Posted to common-user@hadoop.apache.org by Stephen Boesch <ja...@gmail.com> on 2013/05/24 07:38:51 UTC
Hint on EOFException's on datanodes
On a smallish (10-node) cluster with only 2 mappers per node, EOFExceptions
start cropping up on the datanodes after a few minutes; an example is shown
below.
Any hints on what to tweak in the Hadoop or cluster settings to resolve
this?
2013-05-24 05:03:57,460 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode
(org.apache.hadoop.hdfs.server.datanode.DataXceiver@1b1accfc): writeBlock
blk_7760450154173670997_48372 received exception java.io.EOFException:
while trying to read 65557 bytes
2013-05-24 05:03:57,262 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode (PacketResponder 0 for
Block blk_-3990749197748165818_48331): PacketResponder 0 for block
blk_-3990749197748165818_48331 terminating
2013-05-24 05:03:57,460 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode
(org.apache.hadoop.hdfs.server.datanode.DataXceiver@1b1accfc):
DatanodeRegistration(10.254.40.79:9200,
storageID=DS-1106090267-10.254.40.79-9200-1369343833886, infoPort=9102,
ipcPort=9201):DataXceiver
java.io.EOFException: while trying to read 65557 bytes
at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:268)
at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:312)
at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:376)
at
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
at java.lang.Thread.run(Thread.java:662)
2013-05-24 05:03:57,261 INFO org.apache.hadoop.hdfs.server.datanode.Dat
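For write-pipeline EOFExceptions like the one above, the usual first knobs on a Hadoop 1.x-era cluster are the DataNode transceiver limit and the socket timeouts in hdfs-site.xml. The property names below are the standard ones from that era (including the deliberately misspelled "xcievers"); the values are only illustrative starting points, not verified fixes for this particular cluster.

```xml
<!-- hdfs-site.xml (Hadoop 1.x era); values are illustrative -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <!-- cap on concurrent DataXceiver threads per DataNode -->
  <value>4096</value>
</property>
<property>
  <name>dfs.socket.timeout</name>
  <!-- client/DataNode read timeout, in milliseconds -->
  <value>600000</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <!-- DataNode write timeout, in milliseconds -->
  <value>600000</value>
</property>
```

Raising the timeouts mainly helps when slow or congested links cause a peer to give up mid-transfer; it will not help if connections are being dropped outright.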
Re: Hint on EOFException's on datanodes
Posted by Azuryy Yu <az...@gmail.com>.
Maybe a network issue: the datanode received an incomplete packet.
--Sent from my Sony mobile.
On May 24, 2013 1:39 PM, "Stephen Boesch" <ja...@gmail.com> wrote:
>
> On a smallish (10-node) cluster with only 2 mappers per node, EOFExceptions
> start cropping up on the datanodes after a few minutes; an example is shown
> below.
>
> Any hints on what to tweak in the Hadoop or cluster settings to resolve
> this?
>
>
> 2013-05-24 05:03:57,460 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode
> (org.apache.hadoop.hdfs.server.datanode.DataXceiver@1b1accfc): writeBlock
> blk_7760450154173670997_48372 received exception java.io.EOFException:
> while trying to read 65557 bytes
> 2013-05-24 05:03:57,262 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode (PacketResponder 0 for
> Block blk_-3990749197748165818_48331): PacketResponder 0 for block
> blk_-3990749197748165818_48331 terminating
> 2013-05-24 05:03:57,460 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode
> (org.apache.hadoop.hdfs.server.datanode.DataXceiver@1b1accfc):
> DatanodeRegistration(10.254.40.79:9200,
> storageID=DS-1106090267-10.254.40.79-9200-1369343833886, infoPort=9102,
> ipcPort=9201):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:268)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:312)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:376)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
> at java.lang.Thread.run(Thread.java:662)
> 2013-05-24 05:03:57,261 INFO org.apache.hadoop.hdfs.server.datanode.Dat
>
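The diagnosis above, an incomplete packet ending the stream, is exactly the condition under which a blocking full-buffer read throws EOFException: BlockReceiver.readToBuf insists on a whole packet, and the stream ends first. A minimal standalone sketch of that failure mode (the class name, helper method, and the 30000-byte truncation are illustrative; 65557 matches the packet size in the log, i.e. a 64 KB payload plus header and checksum overhead):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class PartialPacketDemo {
    // Returns true if a full packet of packetLen bytes could be read,
    // false if the stream ended early: the BlockReceiver failure mode.
    static boolean readPacket(byte[] received, int packetLen) throws IOException {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(received))) {
            // readFully blocks until packetLen bytes arrive or the
            // stream ends; an early end raises EOFException.
            in.readFully(new byte[packetLen]);
            return true;
        } catch (EOFException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // A sender that disconnects after 30000 of 65557 bytes:
        System.out.println(readPacket(new byte[30000], 65557)); // false
        // A sender that delivers the whole packet:
        System.out.println(readPacket(new byte[65557], 65557)); // true
    }
}
```

The point of the sketch is that the exception fires on the receiving DataNode even though the fault is upstream: whatever closed the connection (network drop, a dying client, or an upstream node in the write pipeline) is where to look.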