Posted to common-dev@hadoop.apache.org by "chandravadana (JIRA)" <ji...@apache.org> on 2008/09/10 09:18:47 UTC

[jira] Commented: (HADOOP-4026) bad connect ack with first bad link

    [ https://issues.apache.org/jira/browse/HADOOP-4026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629727#action_12629727 ] 

chandravadana commented on HADOOP-4026:
---------------------------------------


I have checked this with version 17.2.
I still come across this error.
Kindly help regarding this issue.


> bad connect ack with first bad link
> -----------------------------------
>
>                 Key: HADOOP-4026
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4026
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.16.4
>         Environment: Red Hat Linux, cluster with 3 systems:
> 10.232.25.197- master
> 10.232.25.96-slave1
> 10.232.25.69-slave2
>            Reporter: chandravadana
>            Priority: Blocker
>   Original Estimate: 0.33h
>  Remaining Estimate: 0.33h
>
> wordcount/hi/ is the input directory.
> When I execute:
> # bin/hadoop dfs -copyFromLocal wordcount/hi wordcount/ins
> I get the following messages:
> 08/08/25 13:43:30 INFO dfs.DFSClient: Exception in
> createBlockOutputStream java.io.IOException: Bad connect ack with
> firstBadLink 10.232.25.69:50010
> 08/08/25 13:43:30 INFO dfs.DFSClient: Abandoning block
> blk_-3916191835981679734
> 08/08/25 13:43:36 INFO dfs.DFSClient: Exception in
> createBlockOutputStream java.io.IOException: Bad connect ack with
> firstBadLink 10.232.25.69:50010
> 08/08/25 13:43:36 INFO dfs.DFSClient: Abandoning block
> blk_-7058774921272589893
> 08/08/25 13:43:42 INFO dfs.DFSClient: Exception in
> createBlockOutputStream java.io.IOException: Bad connect ack with
> firstBadLink 10.232.25.69:50010
> 08/08/25 13:43:42 INFO dfs.DFSClient: Abandoning block
> blk_3767065959322874247
> 08/08/25 13:43:48 INFO dfs.DFSClient: Exception in
> createBlockOutputStream java.io.IOException: Bad connect ack with
> firstBadLink 10.232.25.69:50010
> 08/08/25 13:43:48 INFO dfs.DFSClient: Abandoning block
> blk_-8330992315825789947
> 08/08/25 13:43:54 WARN dfs.DFSClient: DataStreamer Exception:
> java.io.IOException: Unable to create new block.
> 08/08/25 13:43:54 WARN dfs.DFSClient: Error Recovery for block
> blk_-8330992315825789947 bad datanode[1]
> copyFromLocal: Could not get block locations. Aborting...
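> The firstBadLink 10.232.25.69:50010 points at slave2, so the client cannot get a connect ack through that datanode. A minimal reachability check from the master (a sketch only; it assumes slave2 uses the same default data-transfer port 50010 that slave1's log below reports opening):
> # telnet 10.232.25.69 50010
> If the connection is refused or times out, the problem is network-level rather than inside HDFS.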
> When I examine the log file of the slave, I see this:
> 2008-08-25 13:42:18,140 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = slave1/10.232.25.96
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.16.4
> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16 -r 652614; compiled by 'hadoopqa' on Fri May  2 00:18:12 UTC 2008
> ************************************************************/
> 2008-08-25 13:42:18,634 INFO org.apache.hadoop.dfs.Storage: Storage
> directory /etc/hadoop_install/hadoop-0.16.4/datanodedir is not
> formatted.
> 2008-08-25 13:42:18,634 INFO org.apache.hadoop.dfs.Storage:
> Formatting ...
> 2008-08-25 13:42:18,701 INFO org.apache.hadoop.dfs.DataNode: Registered
> FSDatasetStatusMBean
> 2008-08-25 13:42:18,701 INFO org.apache.hadoop.dfs.DataNode: Opened
> server at 50010
> 2008-08-25 13:42:18,705 INFO org.apache.hadoop.dfs.DataNode: Balancing
> bandwith is 1048576 bytes/s
> 2008-08-25 13:42:18,911 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-08-25 13:42:19,013 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
> 2008-08-25 13:42:19,014 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
> 2008-08-25 13:42:19,014 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-08-25 13:42:19,579 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.servlet.WebApplicationHandler@11ff436
> 2008-08-25 13:42:19,658 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-08-25 13:42:19,661 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50075
> 2008-08-25 13:42:19,661 INFO org.mortbay.util.Container: Started
> org.mortbay.jetty.Server@1b8f864
> 2008-08-25 13:42:19,706 INFO org.apache.hadoop.dfs.DataNode: New storage
> id DS-860242092-10.232.25.96-50010-1219651939700 is assigned to data-
> node 10.232.25.96:50010
> 2008-08-25 13:42:19,733 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-08-25 13:42:19,755 INFO org.apache.hadoop.dfs.DataNode:
> 10.232.25.96:50010In DataNode.run, data = FSDataset
> {dirpath='/etc/hadoop_install/hadoop-0.16.4/datanodedir/current'}
> 2008-08-25 13:42:19,755 INFO org.apache.hadoop.dfs.DataNode: using
> BLOCKREPORT_INTERVAL of 3538776msec Initial delay: 60000msec
> 2008-08-25 13:42:19,828 INFO org.apache.hadoop.dfs.DataNode: BlockReport
> of 0 blocks got processed in 20 msecs
> 2008-08-25 13:45:43,982 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_1031802361447574775 src: /10.232.25.197:40282
> dest: /10.232.25.197:50010
> 2008-08-25 13:45:44,032 INFO org.apache.hadoop.dfs.DataNode: Datanode 0
> forwarding connect ack to upstream firstbadlink is
> 2008-08-25 13:45:44,081 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_1031802361447574775 of size 3161 from /10.232.25.197
> 2008-08-25 13:45:44,081 INFO org.apache.hadoop.dfs.DataNode:
> PacketResponder 0 for block blk_1031802361447574775 terminating
> 2008-08-25 13:45:44,105 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_-1924738157193733587 src: /10.232.25.197:40285
> dest: /10.232.25.197:50010
> 2008-08-25 13:45:44,106 INFO org.apache.hadoop.dfs.DataNode: Datanode 0
> forwarding connect ack to upstream firstbadlink is
> 2008-08-25 13:45:44,193 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-1924738157193733587 of size 6628 from /10.232.25.197
> 2008-08-25 13:45:44,193 INFO org.apache.hadoop.dfs.DataNode:
> PacketResponder 0 for block blk_-1924738157193733587 terminating
> 2008-08-25 13:45:44,212 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_7001275375373078911 src: /10.232.25.197:40287
> dest: /10.232.25.197:50010
> 2008-08-25 13:45:44,213 INFO org.apache.hadoop.dfs.DataNode: Datanode 0
> forwarding connect ack to upstream firstbadlink is
> 2008-08-25 13:45:44,256 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_7001275375373078911 of size 3161 from /10.232.25.197
> 2008-08-25 13:45:44,256 INFO org.apache.hadoop.dfs.DataNode:
> PacketResponder 0 for block blk_7001275375373078911 terminating
> 2008-08-25 13:45:44,277 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_-7471693146363669981 src: /10.232.25.197:40289
> dest: /10.232.25.197:50010
> 2008-08-25 13:45:44,278 INFO org.apache.hadoop.dfs.DataNode: Datanode 0
> forwarding connect ack to upstream firstbadlink is
> 2008-08-25 13:45:44,362 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-7471693146363669981 of size 6628 from /10.232.25.197
> 2008-08-25 13:45:44,362 INFO org.apache.hadoop.dfs.DataNode:
> PacketResponder 0 for block blk_-7471693146363669981 terminating
> 2008-08-25 13:45:44,380 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_-6619078097753318750 src: /10.232.25.197:40291
> dest: /10.232.25.197:50010
> 2008-08-25 13:45:44,380 INFO org.apache.hadoop.dfs.DataNode: Datanode 0
> forwarding connect ack to upstream firstbadlink is
> 2008-08-25 13:45:44,424 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_-6619078097753318750 of size 2778 from /10.232.25.197
> 2008-08-25 13:45:44,424 INFO org.apache.hadoop.dfs.DataNode:
> PacketResponder 0 for block blk_-6619078097753318750 terminating
> 2008-08-25 13:45:44,440 INFO org.apache.hadoop.dfs.DataNode: Receiving
> block blk_1527614673854389960 src: /10.232.25.197:40293
> dest: /10.232.25.197:50010
> 2008-08-25 13:45:44,441 INFO org.apache.hadoop.dfs.DataNode: Datanode 0
> forwarding connect ack to upstream firstbadlink is
> 2008-08-25 13:45:44,526 INFO org.apache.hadoop.dfs.DataNode: Received
> block blk_1527614673854389960 of size 4616 from /10.232.25.197
> 2008-08-25 13:45:44,526 INFO org.apache.hadoop.dfs.DataNode:
> PacketResponder 0 for block blk_1527614673854389960 terminating
> 2008-08-25 13:47:21,331 INFO org.apache.hadoop.dfs.DataBlockScanner:
> Verification succeeded for blk_1527614673854389960
> 2008-08-25 13:48:11,458 INFO org.apache.hadoop.dfs.DataBlockScanner:
> Verification succeeded for blk_7001275375373078911
> I don't know what changes I should make, nor where exactly the problem comes from.
> Kindly help me in resolving this issue.
> Thanks in advance.
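> PS: one common cause of this "Bad connect ack with firstBadLink" pattern (just a guess from the symptoms above, not something this report confirms) is a host firewall on the failing slave blocking the datanode ports 50010/50075 seen in the logs. The current rules on 10.232.25.69 can be listed with:
> # iptables -L -n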

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.