Posted to common-user@hadoop.apache.org by "raghu.nabbler" <ra...@gmail.com> on 2009/01/10 10:30:10 UTC
Re: Error "Bad connect ack with firstBadLink" when using hadoop-0.17.2
"Hi, I am having the same problem. Did you get a solution for it?"
Prasad Vishnubhotla wrote:
>
> I am using hadoop-0.17.2 on a two-node system - master (9.26.118.116) &
> slave (9.41.83.82) - with datanode running on both machines. After
> successfully executing "hadoop namenode -format" and "start-dfs.sh" on the
> master machine, if I run the command "hadoop dfs -copyFromLocal srcDir
> dstDir" from the master machine, I am getting the following errors:
>
> 08/08/26 01:27:03 INFO dfs.DFSClient: Exception in createBlockOutputStream
> java.io.IOException: Bad connect ack with firstBadLink 9.41.83.82:50200
> 08/08/26 01:27:03 INFO dfs.DFSClient: Abandoning block
> blk_-850157504086372990
> 08/08/26 01:27:10 INFO dfs.DFSClient: Exception in createBlockOutputStream
> java.io.IOException: Bad connect ack with firstBadLink 9.41.83.82:50200
> 08/08/26 01:27:10 INFO dfs.DFSClient: Abandoning block
> blk_841485342076617732
> 08/08/26 01:27:16 INFO dfs.DFSClient: Exception in createBlockOutputStream
> java.io.IOException: Bad connect ack with firstBadLink 9.41.83.82:50200
> 08/08/26 01:27:16 INFO dfs.DFSClient: Abandoning block
> blk_-8082476515595048888
> 08/08/26 01:27:22 INFO dfs.DFSClient: Exception in createBlockOutputStream
> java.io.IOException: Bad connect ack with firstBadLink 9.41.83.82:50200
> 08/08/26 01:27:22 INFO dfs.DFSClient: Abandoning block
> blk_-1811482700203630462
> 08/08/26 01:27:28 WARN dfs.DFSClient: DataStreamer Exception:
> java.io.IOException: Unable to create new block.
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2246)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1700(DFSClient.java:1702)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1842)
>
> 08/08/26 01:27:28 WARN dfs.DFSClient: Error Recovery for block
> blk_-1811482700203630462 bad datanode[1]
> copyFromLocal: Could not get block locations. Aborting...
> Exception closing file /user/vishnu/dstDir/mdticks-4675696.del
> java.io.IOException: Could not get block locations. Aborting...
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2081)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1300(DFSClient.java:1702)
> at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1818)
>
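[Editor's note: every retry above reports the same "firstBadLink" address, which is the pipeline datanode the client could not reach. A small shell sketch extracts that address from such a message; the sed pattern is an illustrative assumption, not anything Hadoop ships.]

```shell
# Pull the unreachable datanode address out of a DFSClient "firstBadLink" error.
# The sample line is copied verbatim from the client output above.
line='java.io.IOException: Bad connect ack with firstBadLink 9.41.83.82:50200'
badlink=$(echo "$line" | sed -n 's/.*firstBadLink \([0-9.]*:[0-9]*\).*/\1/p')
echo "$badlink"   # prints 9.41.83.82:50200
```

Here every retry names 9.41.83.82:50200, i.e. the slave datanode, so that is the node whose network setup deserves a look.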
>
> Here are Namenode and Datanode logs on the master machine and Datanode log
> on the slave machine.
>
> Master's Namenode Log:
>
> 2008-08-26 01:25:32,329 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = tigris/9.26.118.116
> STARTUP_MSG: args = []
> STARTUP_MSG: version = 0.17.2
> STARTUP_MSG: build =
> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r
> 684969; compiled by 'oom' on Wed Aug 13 05:29:53 UTC 2008
> ************************************************************/
> 2008-08-26 01:25:32,468 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=50310
> 2008-08-26 01:25:32,474 INFO org.apache.hadoop.dfs.NameNode: Namenode up
> at: tigris.torolab.ibm.com/9.26.118.116:50310
> 2008-08-26 01:25:32,479 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2008-08-26 01:25:32,481 INFO org.apache.hadoop.dfs.NameNodeMetrics:
> Initializing NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2008-08-26 01:25:32,561 INFO org.apache.hadoop.fs.FSNamesystem:
> fsOwner=vishnu,dasadm1
> 2008-08-26 01:25:32,561 INFO org.apache.hadoop.fs.FSNamesystem:
> supergroup=supergroup
> 2008-08-26 01:25:32,561 INFO org.apache.hadoop.fs.FSNamesystem:
> isPermissionEnabled=true
> 2008-08-26 01:25:32,615 INFO org.apache.hadoop.fs.FSNamesystem: Finished
> loading FSImage in 90 msecs
> 2008-08-26 01:25:32,617 INFO org.apache.hadoop.dfs.StateChange: STATE*
> Leaving safe mode after 0 secs.
> 2008-08-26 01:25:32,618 INFO org.apache.hadoop.dfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2008-08-26 01:25:32,618 INFO org.apache.hadoop.dfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2008-08-26 01:25:32,626 INFO org.apache.hadoop.fs.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2008-08-26 01:25:32,700 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-08-26 01:25:32,862 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-08-26 01:25:33,106 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.servlet.WebApplicationHandler@45e67e6a
> 2008-08-26 01:25:33,132 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-08-26 01:25:33,132 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-08-26 01:25:33,132 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-08-26 01:25:33,134 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50070
> 2008-08-26 01:25:33,134 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.Server@13b06041
> 2008-08-26 01:25:33,134 INFO org.apache.hadoop.fs.FSNamesystem: Web-server
> up at: 0.0.0.0:50070
> 2008-08-26 01:25:33,134 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2008-08-26 01:25:33,136 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50310: starting
> 2008-08-26 01:25:33,136 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50310: starting
> 2008-08-26 01:25:33,136 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50310: starting
> 2008-08-26 01:25:33,136 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50310: starting
> 2008-08-26 01:25:33,136 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 50310: starting
> 2008-08-26 01:25:33,137 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 50310: starting
> 2008-08-26 01:25:33,137 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 50310: starting
> 2008-08-26 01:25:33,137 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 50310: starting
> 2008-08-26 01:25:33,137 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 50310: starting
> 2008-08-26 01:25:33,137 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 8 on 50310: starting
> 2008-08-26 01:25:33,140 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 9 on 50310: starting
> 2008-08-26 01:25:54,946 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 9.41.83.82:50200
> storage DS-1768263747-9.41.83.82-50200-1219727607112
> 2008-08-26 01:25:54,950 INFO org.apache.hadoop.net.NetworkTopology: Adding
> a new node: /default-rack/9.41.83.82:50200
> 2008-08-26 01:25:57,490 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.registerDatanode: node registration from 9.26.118.116:50200
> storage DS-695123498-9.26.118.116-50200-1219728357488
> 2008-08-26 01:25:57,490 INFO org.apache.hadoop.net.NetworkTopology: Adding
> a new node: /default-rack/9.26.118.116:50200
> 2008-08-26 01:27:03,766 INFO org.apache.hadoop.fs.FSNamesystem: Number of
> transactions: 5 Total time for transactions(ms): 2 Number of syncs: 2
> SyncTimes(ms): 55
> 2008-08-26 01:27:03,807 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/vishnu/mspoc/mdticks-4675696.del.
> blk_-850157504086372990
> 2008-08-26 01:27:09,945 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/vishnu/mspoc/mdticks-4675696.del.
> blk_841485342076617732
> 2008-08-26 01:27:16,024 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/vishnu/mspoc/mdticks-4675696.del.
> blk_-8082476515595048888
> 2008-08-26 01:27:22,108 INFO org.apache.hadoop.dfs.StateChange: BLOCK*
> NameSystem.allocateBlock: /user/vishnu/mspoc/mdticks-4675696.del.
> blk_-1811482700203630462
>
> Master's Datanode Log:
>
> 2008-08-26 01:25:33,957 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG: host = tigris/9.26.118.116
> STARTUP_MSG: args = []
> STARTUP_MSG: version = 0.17.2
> STARTUP_MSG: build =
> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r
> 684969; compiled by 'oom' on Wed Aug 13 05:29:53 UTC 2008
> ************************************************************/
> 2008-08-26 01:25:34,156 INFO org.apache.hadoop.dfs.Storage: Storage
> directory /tmp/hadoop-vishnu/dfs/data is not formatted.
> 2008-08-26 01:25:34,156 INFO org.apache.hadoop.dfs.Storage: Formatting ...
> 2008-08-26 01:25:34,430 INFO org.apache.hadoop.dfs.DataNode: Registered
> FSDatasetStatusMBean
> 2008-08-26 01:25:34,431 INFO org.apache.hadoop.dfs.DataNode: Opened server
> at 50200
> 2008-08-26 01:25:34,433 INFO org.apache.hadoop.dfs.DataNode: Balancing
> bandwith is 1048576 bytes/s
> 2008-08-26 01:25:34,486 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-08-26 01:25:34,580 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-08-26 01:25:34,977 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.servlet.WebApplicationHandler@1eb3319f
> 2008-08-26 01:25:35,002 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-08-26 01:25:35,002 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-08-26 01:25:35,002 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-08-26 01:25:35,004 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50075
> 2008-08-26 01:25:35,004 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.Server@709446e4
> 2008-08-26 01:25:35,007 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-08-26 01:25:57,551 INFO org.apache.hadoop.dfs.DataNode: New storage
> id DS-695123498-9.26.118.116-50200-1219728357488 is assigned to data-node
> 9.26.118.116:50200
> 2008-08-26 01:25:57,551 INFO org.apache.hadoop.dfs.DataNode:
> 9.26.118.116:50200In DataNode.run, data =
> FSDataset{dirpath='/tmp/hadoop-vishnu/dfs/data/current'}
> 2008-08-26 01:25:57,554 INFO org.apache.hadoop.dfs.DataNode: using
> BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 60000msec
> 2008-08-26 01:26:51,699 INFO org.apache.hadoop.dfs.DataNode: BlockReport
> of 0 blocks got processed in 3 msecs
> 2008-08-26 01:27:03,847 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_-850157504086372990 src: /9.26.118.116:33034 dest:
> /9.26.118.116:50200
> 2008-08-26 01:27:03,934 INFO org.apache.hadoop.dfs.DataNode: writeBlock
> blk_-850157504086372990 received exception
> java.net.NoRouteToHostException: No route to host
> 2008-08-26 01:27:03,936 ERROR org.apache.hadoop.dfs.DataNode:
> 9.26.118.116:50200:DataXceiver: java.net.NoRouteToHostException: No route
> to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:994)
> at java.lang.Thread.run(Thread.java:595)
>
> 2008-08-26 01:27:09,946 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_841485342076617732 src: /9.26.118.116:33039 dest:
> /9.26.118.116:50200
> 2008-08-26 01:27:10,013 INFO org.apache.hadoop.dfs.DataNode: writeBlock
> blk_841485342076617732 received exception java.net.NoRouteToHostException:
> No route to host
> 2008-08-26 01:27:10,014 ERROR org.apache.hadoop.dfs.DataNode:
> 9.26.118.116:50200:DataXceiver: java.net.NoRouteToHostException: No route
> to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:994)
> at java.lang.Thread.run(Thread.java:595)
>
> 2008-08-26 01:27:16,032 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_-8082476515595048888 src: /9.26.118.116:33044 dest:
> /9.26.118.116:50200
> 2008-08-26 01:27:16,099 INFO org.apache.hadoop.dfs.DataNode: writeBlock
> blk_-8082476515595048888 received exception
> java.net.NoRouteToHostException: No route to host
> 2008-08-26 01:27:16,100 ERROR org.apache.hadoop.dfs.DataNode:
> 9.26.118.116:50200:DataXceiver: java.net.NoRouteToHostException: No route
> to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:994)
> at java.lang.Thread.run(Thread.java:595)
>
> 2008-08-26 01:27:22,109 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_-1811482700203630462 src: /9.26.118.116:33049 dest:
> /9.26.118.116:50200
> 2008-08-26 01:27:22,176 INFO org.apache.hadoop.dfs.DataNode: writeBlock
> blk_-1811482700203630462 received exception
> java.net.NoRouteToHostException: No route to host
> 2008-08-26 01:27:22,177 ERROR org.apache.hadoop.dfs.DataNode:
> 9.26.118.116:50200:DataXceiver: java.net.NoRouteToHostException: No route
> to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1150)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:994)
> at java.lang.Thread.run(Thread.java:595)
>
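[Editor's note: the repeated java.net.NoRouteToHostException above shows the master's datanode failing to open a socket to the next node in the write pipeline. In practice this usually points to a host firewall (for example iptables on the slave) rejecting TCP connections to the datanode port, rather than a Hadoop misconfiguration. A minimal, hedged probe, assuming bash (for its /dev/tcp redirection) and the coreutils timeout command are available:]

```shell
# Probe a TCP port the way the datanode pipeline would.
# Prints "open" if a connection succeeds within 3 seconds, else "closed".
check_port() {
  host=$1
  port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Demo on a local port that is almost certainly closed; prints "closed".
check_port 127.0.0.1 1
```

In this thread's setup one would run `check_port 9.41.83.82 50200` from the master; it should print "open" once the slave's firewall permits the configured datanode port (e.g. via an iptables rule such as `iptables -I INPUT -p tcp --dport 50200 -j ACCEPT`, adjusted to local policy).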
> 2008-08-26 01:38:37,930 INFO org.apache.hadoop.dfs.DataNode: BlockReport
> of 0 blocks got processed in 1 msecs
>
> Slave's Datanode Log:
>
> 2008-08-26 01:13:06,524 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG: host = siva/9.41.83.82
> STARTUP_MSG: args = []
> STARTUP_MSG: version = 0.17.2
> STARTUP_MSG: build =
> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r
> 684969; compiled by 'oom' on Wed Aug 13 05:29:53 UTC 2008
> ************************************************************/
> 2008-08-26 01:13:06,841 INFO org.apache.hadoop.dfs.Storage: Storage
> directory /tmp/hadoop-vishnu/dfs/data is not formatted.
> 2008-08-26 01:13:06,841 INFO org.apache.hadoop.dfs.Storage: Formatting ...
> 2008-08-26 01:13:06,857 INFO org.apache.hadoop.dfs.DataNode: Registered
> FSDatasetStatusMBean
> 2008-08-26 01:13:06,859 INFO org.apache.hadoop.dfs.DataNode: Opened server
> at 50200
> 2008-08-26 01:13:06,860 INFO org.apache.hadoop.dfs.DataNode: Balancing
> bandwith is 1048576 bytes/s
> 2008-08-26 01:13:06,906 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-08-26 01:13:06,962 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-08-26 01:13:07,256 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.servlet.WebApplicationHandler@52458f41
> 2008-08-26 01:13:07,281 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-08-26 01:13:07,281 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-08-26 01:13:07,281 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-08-26 01:13:07,283 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50075
> 2008-08-26 01:13:07,283 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.Server@125b8827
> 2008-08-26 01:13:07,286 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-08-26 01:13:27,260 INFO org.apache.hadoop.dfs.DataNode: New storage
> id DS-1768263747-9.41.83.82-50200-1219727607112 is assigned to data-node
> 9.41.83.82:50200
> 2008-08-26 01:13:27,260 INFO org.apache.hadoop.dfs.DataNode:
> 9.41.83.82:50200In DataNode.run, data =
> FSDataset{dirpath='/tmp/hadoop-vishnu/dfs/data/current'}
> 2008-08-26 01:13:27,261 INFO org.apache.hadoop.dfs.DataNode: using
> BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 60000msec
> 2008-08-26 01:14:06,572 INFO org.apache.hadoop.dfs.DataNode: BlockReport
> of 0 blocks got processed in 73 msecs
>
>
> Thanks for the help,
>
> Prasad Vishnubhotla
>
--
View this message in context: http://www.nabble.com/Error-%22Bad-connect-ack-with-firstBadLink%22-when-using-hadoop-0.17.2-tp19156180p21386347.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.