Posted to common-user@hadoop.apache.org by Amandeep Khurana <am...@gmail.com> on 2009/03/11 02:27:17 UTC

Error while putting data onto hdfs

I was trying to put a 1 gig file onto HDFS and I got the following error:

09/03/10 18:23:16 WARN hdfs.DFSClient: DataStreamer Exception:
java.net.SocketTimeoutException: 5000 millis timeout while waiting for
channel to be ready for write. ch : java.nio.channels.SocketChannel[connected
local=/171.69.102.53:34414 remote=/171.69.102.51:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:162)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.write(Unknown Source)
    at java.io.DataOutputStream.write(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2209)

09/03/10 18:23:16 WARN hdfs.DFSClient: Error Recovery for block
blk_2971879428934911606_36678 bad datanode[0] 171.69.102.51:50010
put: All datanodes 171.69.102.51:50010 are bad. Aborting...
Exception closing file /user/amkhuran/221rawdata/1g
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
    at org.apache.hadoop.fs.FsShell.close(FsShell.java:1842)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:1856)


What's going wrong?

Amandeep


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz

Re: Error while putting data onto hdfs

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Amandeep Khurana wrote:
> What happens if you set it to 0? How is it a workaround? 

HBase needs it in pre-0.19.0 (related story:
http://www.nabble.com/Datanode-Xceivers-td21372227.html). It should not
matter if you move to 0.19.0 or newer.

> And how would it
> matter if I change it to a large value?

A very large value like 100 years is the same as setting it to 0 (for all
practical purposes).
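
For instance, a client could set it like this before opening the FileSystem
(a minimal sketch, not from the original thread; the class name is made up,
and it assumes the property is read as an int of milliseconds):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class BigWriteTimeout {
      public static void main(String[] args) throws Exception {
        // Make the client-side write timeout effectively infinite:
        // Integer.MAX_VALUE ms is roughly 24 days, which is "forever"
        // for any single block write, without disabling it outright.
        Configuration conf = new Configuration();
        conf.setInt("dfs.datanode.socket.write.timeout", Integer.MAX_VALUE);
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to " + fs.getUri());
      }
    }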

Raghu.


> 
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
> 
> 
> On Wed, Mar 11, 2009 at 12:00 PM, Raghu Angadi <ra...@yahoo-inc.com> wrote:
> 
>> Amandeep Khurana wrote:
>>
>>> My dfs.datanode.socket.write.timeout is set to 0. This had to be done to
>>> get HBase to work.
>>>
>> ah.. I see, we should fix that. Not sure how others haven't seen it till
>> now. Affects only those with write.timeout set to 0 on the clients.
>>
>> Since setting it to 0 itself is a workaround, please change that to some
>> extremely large value for now.
>>
>> Raghu.
>>
>>
>>
>>> Amandeep Khurana
>>> Computer Science Graduate Student
>>> University of California, Santa Cruz
>>>
>>>
>>> On Wed, Mar 11, 2009 at 10:23 AM, Raghu Angadi <rangadi@yahoo-inc.com> wrote:
>>>> Did you change dfs.datanode.socket.write.timeout to 5 seconds? The
>>>> exception message says so. It is extremely small.
>>>>
>>>> The default is 8 minutes and is intentionally pretty high. Its purpose is
>>>> mainly to catch extremely unresponsive datanodes and other network
>>>> issues.
>>>>
>>>> Raghu.
>>>>
>>>>
>>>> Amandeep Khurana wrote:
>>>>
>>>>> [original message and stack trace snipped]
> 


Re: Error while putting data onto hdfs

Posted by Amandeep Khurana <am...@gmail.com>.
What happens if you set it to 0? How is it a workaround? And how would it
matter if I change it to a large value?


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


On Wed, Mar 11, 2009 at 12:00 PM, Raghu Angadi <ra...@yahoo-inc.com> wrote:

> Amandeep Khurana wrote:
>
>> My dfs.datanode.socket.write.timeout is set to 0. This had to be done to
>> get HBase to work.
>>
>
> ah.. I see, we should fix that. Not sure how others haven't seen it till
> now. Affects only those with write.timeout set to 0 on the clients.
>
> Since setting it to 0 itself is a workaround, please change that to some
> extremely large value for now.
>
> Raghu.
>
>
>
>> Amandeep Khurana
>> Computer Science Graduate Student
>> University of California, Santa Cruz
>>
>>
>> On Wed, Mar 11, 2009 at 10:23 AM, Raghu Angadi <rangadi@yahoo-inc.com> wrote:
>>
>>> Did you change dfs.datanode.socket.write.timeout to 5 seconds? The
>>> exception message says so. It is extremely small.
>>>
>>> The default is 8 minutes and is intentionally pretty high. Its purpose is
>>> mainly to catch extremely unresponsive datanodes and other network
>>> issues.
>>>
>>> Raghu.
>>>
>>>
>>> Amandeep Khurana wrote:
>>>
>>>> [original message and stack trace snipped]
>>
>

Re: Error while putting data onto hdfs

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Raghu Angadi wrote:
> Amandeep Khurana wrote:
>> My dfs.datanode.socket.write.timeout is set to 0. This had to be done 
>> to get HBase to work.
> 
> ah.. I see, we should fix that. Not sure how others haven't seen it till 
> now. Affects only those with write.timeout set to 0 on the clients.

filed: https://issues.apache.org/jira/browse/HADOOP-5464


> Since setting it to 0 itself is a workaround, please change that to
> some extremely large value for now.
> 
> Raghu.
> 

Re: Error while putting data onto hdfs

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Amandeep Khurana wrote:
> My dfs.datanode.socket.write.timeout is set to 0. This had to be done to get
> HBase to work.

ah.. I see, we should fix that. Not sure how others haven't seen it till 
now. Affects only those with write.timeout set to 0 on the clients.

Since setting it to 0 itself is a workaround, please change that to
some extremely large value for now.

Raghu.

> 
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
> 
> 
> On Wed, Mar 11, 2009 at 10:23 AM, Raghu Angadi <ra...@yahoo-inc.com> wrote:
> 
>> Did you change dfs.datanode.socket.write.timeout to 5 seconds? The
>> exception message says so. It is extremely small.
>>
>> The default is 8 minutes and is intentionally pretty high. Its purpose is
>> mainly to catch extremely unresponsive datanodes and other network issues.
>>
>> Raghu.
>>
>>
>> Amandeep Khurana wrote:
>>
>>> [original message and stack trace snipped]
> 


Re: Error while putting data onto hdfs

Posted by Amandeep Khurana <am...@gmail.com>.
My dfs.datanode.socket.write.timeout is set to 0. This had to be done to get
HBase to work.
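
For reference, the workaround described amounts to something like this on the
client side (a sketch with a made-up class name; the same property can
instead go in the client's hadoop-site.xml):

    import org.apache.hadoop.conf.Configuration;

    public class DisableWriteTimeout {
      public static void main(String[] args) {
        // Assumed pre-0.19.0 HBase workaround: a value of 0 disables
        // the DFS client's socket write timeout entirely.
        Configuration conf = new Configuration();
        conf.setInt("dfs.datanode.socket.write.timeout", 0);
        System.out.println(conf.get("dfs.datanode.socket.write.timeout"));
      }
    }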


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


On Wed, Mar 11, 2009 at 10:23 AM, Raghu Angadi <ra...@yahoo-inc.com> wrote:

>
> Did you change dfs.datanode.socket.write.timeout to 5 seconds? The
> exception message says so. It is extremely small.
>
> The default is 8 minutes and is intentionally pretty high. Its purpose is
> mainly to catch extremely unresponsive datanodes and other network issues.
>
> Raghu.
>
>
> Amandeep Khurana wrote:
>
>> [original message and stack trace snipped]
>

Re: Error while putting data onto hdfs

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Did you change dfs.datanode.socket.write.timeout to 5 seconds? The 
exception message says so. It is extremely small.

The default is 8 minutes and is intentionally pretty high. Its purpose 
is mainly to catch extremely unresponsive datanodes and other network 
issues.
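
For context, the "5000 millis" in the exception is this property's value in
milliseconds. A sketch of how a client-side read of it looks (illustrative;
the class and variable names are made up, with the 8-minute default written
out):

    import org.apache.hadoop.conf.Configuration;

    public class ShowWriteTimeout {
      public static void main(String[] args) {
        // A value of 5000 means a 5-second write timeout, matching the
        // "5000 millis timeout" in the exception; the shipped default
        // is 8 minutes (480000 ms).
        Configuration conf = new Configuration();
        int writeTimeout = conf.getInt("dfs.datanode.socket.write.timeout",
                                       8 * 60 * 1000);
        System.out.println("write timeout = " + writeTimeout + " ms");
      }
    }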

Raghu.

Amandeep Khurana wrote:
> [original message and stack trace snipped]