Posted to general@hadoop.apache.org by Peter Haidinyak <ph...@local.com> on 2011/02/07 19:27:54 UTC

Errors in the log

HBase 0.89.20100924+28
Hadoop 0.20.2+737

During my import process I've started to see various warnings and errors in my Hadoop logs. This just started happening; the import process has been working for a while. I've pasted some of the errors from the logs on various machines below to see if this is a known problem.

Thanks

-Pete

Datanode log

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.2.224:50010, storageID=DS-118625752-172.16.2.224-50010-1294851626750, infoPort=50075, ipcPort=50020):DataXceiver
java.io.EOFException: while trying to read 65557 bytes

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.2.224:50010, storageID=DS-118625752-172.16.2.224-50010-1294851626750, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: Interrupted receiveBlock

WARN org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to getBlockMetaDataInfo for block (=blk_2012842016347254862_70849) from datanode (=172.16.2.224:50010)
java.io.IOException: Block blk_2012842016347254862_70849 length is 16906240 does not match block file length 16971264

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run(): 
java.io.IOException: Broken pipe

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception: 
java.io.IOException: Broken pipe

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run(): 
java.io.IOException: The stream is closed



Namenode log

WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed. java.io.IOException: Content-Length header is not provided by the namenode when trying to fetch http://0.0.0.0:50090/getimage?getimage=1


Secondary name node log

ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2011-02-07 08:51:15,062 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://caiss01a:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-19:84946961:0:1297097472000:1297097169213

RE: Errors in the log

Posted by Peter Haidinyak <ph...@local.com>.
Thanks, I'll change lists in the future.

-Pete

-----Original Message-----
From: Aaron T. Myers [mailto:atm@cloudera.com] 
Sent: Monday, February 07, 2011 1:53 PM
To: cdh-user@cloudera.org
Subject: Re: Errors in the log

[Aaron's reply quoted in full; trimmed here, as it appears in his post below.]

Re: Errors in the log

Posted by "Aaron T. Myers" <at...@cloudera.com>.
+cdh-user@cloudera.org
bcc: general@hadoop.apache.org

Hey Pete,

The general@hadoop.apache.org list is for high-level discussion of the
Apache Hadoop community (usually votes and governance issues). A question
like this is more appropriate for a *-user list, and since, judging by the
version numbers, you're using CDH3b3, I've added cdh-user@cloudera.org.

Though I can't comment on the errors you're seeing in the DN log, I do
recognize both errors in your 2NN and NN logs. Those are due to a known bug
in CDH3b3 wherein the 2NN incorrectly determines its own host name during a
checkpoint, and so tells the NN it can be found at 0.0.0.0. (The
"&machine=0.0.0.0" is the giveaway.) This bug will be fixed in the next
release of CDH, but in the meantime the solution is just to configure
"dfs.secondary.http.address" to a valid machine name or IP address that
will resolve to your 2NN.
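
For example, something like the following in hdfs-site.xml (the host name
here is hypothetical; substitute the address of your own 2NN, and note that
50090 is the default 2NN HTTP port):

    <property>
      <name>dfs.secondary.http.address</name>
      <!-- hypothetical host; replace with your real 2NN address -->
      <value>snn01.example.com:50090</value>
    </property>

Restart the 2NN after the change; the NN should then fetch the checkpoint
image from the real host instead of 0.0.0.0.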

--
Aaron T. Myers
Software Engineer, Cloudera



On Mon, Feb 7, 2011 at 10:27 AM, Peter Haidinyak <ph...@local.com> wrote:

> [original message quoted in full; trimmed]