Posted to common-user@hadoop.apache.org by "Kale, Chetan" <Ch...@garmin.com> on 2010/03/16 18:51:43 UTC

random HDFS exceptions

I am getting exceptions when trying to copy a file to HDFS. My steps are as follows -

bin/hadoop fs -rm /input/*
bin/hadoop fs -put 2010-03-15.gar /input/2010-03-15.gar

Following is the error from the put. I run these steps once a day, but I don't see these errors on every run. I am running a pseudo-distributed cluster on a single commodity server. Can anyone tell me why this may be happening, and whether there is a fix or workaround?


10/03/16 01:27:51 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1253)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:739)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

10/03/16 01:27:51 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /input/2010-03-15.gar retries left 4
10/03/16 01:27:51 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
{Above stack trace}

10/03/16 01:27:51 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /input/2010-03-15.gar retries left 3
10/03/16 01:27:52 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
{Above stack trace}

10/03/16 01:27:52 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /input/2010-03-15.gar retries left 2
10/03/16 01:27:54 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
{Above stack trace}

10/03/16 01:27:54 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /input/2010-03-15.gar retries left 1
10/03/16 01:27:57 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
{Above stack trace}

10/03/16 01:27:57 INFO hdfs.DFSClient: Waiting for replication for 6 seconds
10/03/16 01:27:57 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /input/2010-03-15.gar retries left 0
10/03/16 01:28:03 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
{Above stack trace}

10/03/16 01:28:03 WARN hdfs.DFSClient: Error Recovery for block blk_-711466278859187497_1379 bad datanode[0] nodes == null
10/03/16 01:28:03 WARN hdfs.DFSClient: Could not get block locations. Source file "/input/2010-03-15.gar" - Aborting...
put: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/input/2010-03-15.gar
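
As a stopgap I could wrap the put in a small retry loop along these lines (just a sketch; the retry count and sleep interval are arbitrary), but I would rather understand the root cause:

FILE=2010-03-15.gar
bin/hadoop fs -rm /input/*
for i in 1 2 3; do
  if bin/hadoop fs -put "$FILE" /input/"$FILE"; then
    break                                # copy succeeded
  fi
  bin/hadoop fs -rm /input/"$FILE"       # a failed put can leave a partial file behind
  sleep 30                               # give the single-node cluster time to settle
done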



Re: random HDFS exceptions

Posted by Eguzki Astiz Lezaun <eg...@tid.es>.
Hi,

I am having the same issue. We are using Cloudera's CDH2 release (based on Hadoop 0.20).

I have attached the output of dfsadmin -report as well.

Any clue about what's going on?

Thanks in advance for your time.

Eguzki

10/03/17 11:59:10 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketTimeoutException: 66000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.95.105.135:47193 remote=/10.95.105.135:50010]
10/03/17 11:59:11 INFO hdfs.DFSClient: Abandoning block blk_-1668062316868548074_2480
10/03/17 11:59:11 INFO hdfs.DFSClient: Waiting to find target node: 10.95.105.135:50010
10/03/17 12:01:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 10.95.102.51:50010
10/03/17 12:01:43 INFO hdfs.DFSClient: Abandoning block blk_-2470661805829934127_2480
10/03/17 12:01:45 INFO hdfs.DFSClient: Waiting to find target node: 10.95.105.49:50010
10/03/17 12:26:53 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 10.95.106.44:50010
10/03/17 12:26:53 INFO hdfs.DFSClient: Abandoning block blk_-7372214162798128356_2514
10/03/17 12:26:53 INFO hdfs.DFSClient: Waiting to find target node: 10.95.105.49:50010
10/03/17 12:37:12 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_956409610367129117_2524 java.io.IOException: Bad response 1 for block blk_956409610367129117_2524 from datanode 10.95.105.49:50010
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2440)

10/03/17 12:37:12 WARN hdfs.DFSClient: Error Recovery for block blk_956409610367129117_2524 bad datanode[1] 10.95.105.49:50010
10/03/17 12:37:12 WARN hdfs.DFSClient: Error Recovery for block blk_956409610367129117_2524 in pipeline 10.95.103.49:50010, 10.95.105.49:50010: bad datanode 10.95.105.49:50010
10/03/17 14:36:58 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-6850302974681544649_2582 java.net.SocketTimeoutException: 66000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.95.105.135:39856 remote=/10.95.105.49:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readLong(DataInputStream.java:399)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:119)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2410)

10/03/17 14:36:59 WARN hdfs.DFSClient: Error Recovery for block blk_-6850302974681544649_2582 bad datanode[0] 10.95.105.49:50010
10/03/17 14:36:59 WARN hdfs.DFSClient: Error Recovery for block blk_-6850302974681544649_2582 in pipeline 10.95.105.49:50010, 10.95.103.49:50010: bad datanode 10.95.105.49:50010
10/03/17 15:36:13 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 10.95.105.49:50010
10/03/17 15:36:13 INFO hdfs.DFSClient: Abandoning block blk_5964459336033715241_2600
10/03/17 15:36:13 INFO hdfs.DFSClient: Waiting to find target node: 10.95.103.49:50010
10/03/17 15:47:57 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/user/perserver/data/575Gb/ps_es_mstore_events_fact.txt
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1268)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
    at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:966)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:962)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:960)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2932)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2807)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2087)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2274)

10/03/17 15:47:58 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/perserver/data/575Gb/ps_es_mstore_events_fact.txt retries left 4
10/03/17 15:47:58 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/user/perserver/data/575Gb/ps_es_mstore_events_fact.txt
{Above stack trace}

10/03/17 15:47:58 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/perserver/data/575Gb/ps_es_mstore_events_fact.txt retries left 3
10/03/17 15:47:59 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/user/perserver/data/575Gb/ps_es_mstore_events_fact.txt
{Above stack trace}

10/03/17 15:47:59 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/perserver/data/575Gb/ps_es_mstore_events_fact.txt retries left 2
10/03/17 15:48:00 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/user/perserver/data/575Gb/ps_es_mstore_events_fact.txt
{Above stack trace}

10/03/17 15:48:00 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/perserver/data/575Gb/ps_es_mstore_events_fact.txt retries left 1
10/03/17 15:48:04 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet:/user/perserver/data/575Gb/ps_es_mstore_events_fact.txt
{Above stack trace}



dfsadmin -report output:



[hadoop@hclusternn ~]$ hadoop dfsadmin -report
Configured Capacity: 13455070146560 (12.24 TB)
Present Capacity: 12031786528768 (10.94 TB)
DFS Remaining: 10223303581696 (9.3 TB)
DFS Used: 1808482947072 (1.64 TB)
DFS Used%: 15.03%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 5 (5 total, 0 dead)

Name: 10.95.105.135:50010
Rack: /cpdbcn/rack4
Decommission Status : Normal
Configured Capacity: 1902826799104 (1.73 TB)
DFS Used: 353485250560 (329.21 GB)
Non DFS Used: 738641940480 (687.91 GB)
DFS Remaining: 810699608064(755.02 GB)
DFS Used%: 18.58%
DFS Remaining%: 42.61%
Last contact: Thu Mar 18 16:07:08 CET 2010


Name: 10.95.102.51:50010
Rack: /cpdbcn/rack4
Decommission Status : Normal
Configured Capacity: 1927399600128 (1.75 TB)
DFS Used: 299213541376 (278.66 GB)
Non DFS Used: 122673991680 (114.25 GB)
DFS Remaining: 1505512067072(1.37 TB)
DFS Used%: 15.52%
DFS Remaining%: 78.11%
Last contact: Thu Mar 18 16:07:09 CET 2010


Name: 10.95.106.44:50010
Rack: /cpdbcn/rack4
Decommission Status : Normal
Configured Capacity: 1919208640512 (1.75 TB)
DFS Used: 289110945792 (269.26 GB)
Non DFS Used: 114504093696 (106.64 GB)
DFS Remaining: 1515593601024(1.38 TB)
DFS Used%: 15.06%
DFS Remaining%: 78.97%
Last contact: Thu Mar 18 16:07:07 CET 2010


Name: 10.95.105.49:50010
Rack: /cpdbcn/rack4
Decommission Status : Normal
Configured Capacity: 5778235506688 (5.26 TB)
DFS Used: 396275163136 (369.06 GB)
Non DFS Used: 322663055360 (300.5 GB)
DFS Remaining: 5059297288192(4.6 TB)
DFS Used%: 6.86%
DFS Remaining%: 87.56%
Last contact: Thu Mar 18 16:07:08 CET 2010


Name: 10.95.103.49:50010
Rack: /cpdbcn/rack4
Decommission Status : Normal
Configured Capacity: 1927399600128 (1.75 TB)
DFS Used: 470398046208 (438.09 GB)
Non DFS Used: 124800536576 (116.23 GB)
DFS Remaining: 1332201017344(1.21 TB)
DFS Used%: 24.41%
DFS Remaining%: 69.12%
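
Several of the failures above point at the same datanode (10.95.105.49), even though the report shows all five nodes as alive, so I will also double-check connectivity to its data transfer port and its DataNode log, along these lines (just a sketch; the exact log path depends on the install):

nc -z -w 5 10.95.105.49 50010 && echo "datanode port 50010 reachable"
ssh 10.95.105.49 'jps | grep DataNode'
ssh 10.95.105.49 'tail -n 100 /var/log/hadoop/*datanode*.log'   # log path is a guess, adjust for your install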

Re: random HDFS exceptions

Posted by Wang Xu <gn...@gmail.com>.
On Wed, Mar 17, 2010 at 1:51 AM, Kale, Chetan <Ch...@garmin.com> wrote:
> I am getting exceptions when trying to copy a file to HDFS. My steps are as follows -
>
> bin/hadoop fs -rm /input/*
> bin/hadoop fs -put 2010-03-15.gar /input/2010-03-15.gar
>
> Following is the error from the put. I run these steps once a day, but I don't see these errors on every run. I am running a pseudo-distributed cluster on a single commodity server. Can anyone tell me why this may be happening, and whether there is a fix or workaround?

Do you have the DataNode running properly? What is the result of this:
bin/hadoop dfsadmin -report
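
If the report shows no live datanodes, it may also be worth checking that the DataNode process is actually up and looking at its log. For example (just a sketch; paths assume the default tarball layout, with logs under the install directory):

jps | grep DataNode                            # is the DataNode JVM running at all?
bin/hadoop dfsadmin -report | grep "Datanodes available"
tail -n 100 logs/hadoop-*-datanode-*.log       # look for disk, port or heartbeat errors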

-- 
Wang Xu