Posted to common-user@hadoop.apache.org by madhu phatak <ph...@gmail.com> on 2012/05/04 14:46:17 UTC

Bad connect ack with firstBadLink

Hi,
We are running a three-node cluster. For the past two days, whenever we copy
a file to HDFS it throws java.io.IOException: Bad connect ack with
firstBadLink. I searched the net but was not able to resolve the issue. The
following is the stack trace from the datanode log:

2012-05-04 18:08:08,868 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-7520371350112346377_50118 received exception java.net.SocketException: Connection reset
2012-05-04 18:08:08,869 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.23.208.17:50010, storageID=DS-1340171424-172.23.208.17-50010-1334672673051, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:262)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:309)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:373)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:525)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:662)


It would be great if someone could point me in the right direction on how to
solve this problem.

-- 
https://github.com/zinnia-phatak-dev/Nectar

Re: Bad connect ack with firstBadLink

Posted by madhu phatak <ph...@gmail.com>.
Hi,
 Increasing the open file limit solved the issue. Thank you.
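
In case it helps anyone else: raising the limit means increasing the nofile
ulimit for the user that runs the Hadoop daemons and then restarting them.
The user name and value below are only an illustration of the usual change,
not the exact lines from our cluster:

  # /etc/security/limits.conf
  hadoop  soft  nofile  32768
  hadoop  hard  nofile  32768

  # verify after logging in again as that user
  ulimit -n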


-- 
https://github.com/zinnia-phatak-dev/Nectar

Re: Bad connect ack with firstBadLink

Posted by Mapred Learn <ma...@gmail.com>.
Check the number of blocks in your cluster.

This error can also indicate that your datanodes are fuller than they should
be.

Try deleting unnecessary files to free up blocks. You can check both things
with the commands sketched below.
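
Something like the following, run against the namenode, should show the
total block count and how full each datanode is (stock Hadoop commands; the
exact output format varies by version):

  hadoop fsck / | grep -i 'Total blocks'
  hadoop dfsadmin -report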


Re: Bad connect ack with firstBadLink

Posted by Mohit Anchlia <mo...@gmail.com>.
Please see:

http://hbase.apache.org/book.html#dfs.datanode.max.xcievers
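
The short version of that section: each datanode caps the number of
concurrent transceiver (DataXceiver) threads, and a heavy write load can
exhaust it. The fix it describes is a change like this in hdfs-site.xml on
every datanode, followed by a datanode restart (4096 is the value the HBase
book suggests; tune it to your load):

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>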
