Posted to common-user@hadoop.apache.org by André Martin <ma...@andremartin.de> on 2008/02/29 23:42:41 UTC

java.io.IOException: Could not complete write to file // hadoop-0.16.0

Hi everyone,
I'm seeing the above exception on my DFS clients:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /seDNS/mapred-out/17A83EC8CC5DD15549657DF36CA9F3236EC121DB/acengineering-kz-20080229232339788-756.dns by DFSClient_1073032245
>     at org.apache.hadoop.dfs.NameNode.complete(NameNode.java:341)
>     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.lang.reflect.Method.invoke(Unknown Source)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:910)
>     at org.apache.hadoop.ipc.Client.call(Client.java:512)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>     at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>     at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.lang.reflect.Method.invoke(Unknown Source)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>     at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:2288)
>     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:51)
>     at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:67)
Any idea why this exception is thrown?
Thx in advance.

Cu on the 'net,
                        Bye - bye,

                                   <<<<< André <<<< >>>> èrbnA >>>>>




Re: java.io.IOException: Could not complete write to file // hadoop-0.16.0

Posted by André Martin <ma...@andremartin.de>.
Hi everyone,
I did some deeper investigation this morning and found the following:
One of my crawlers created/opened a file using the create method, and
the namenode allocated the appropriate blocks for this file.
Then another crawler opened a file with the same file name (no
exception was thrown). The namenode again allocated blocks for this
file and seems to have invalidated the ones from the first create.
In the meantime, the first crawler put some data into the stream
without any exceptions; only when it tried to close the stream were
the exceptions from the previous posts thrown.
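To make the timing concrete, here is a minimal sketch of the sequence
(the path and data are made up, and both "crawlers" run in one JVM only
to keep the sketch short; in my setup they are separate processes, each
with its own DFSClient):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateRaceSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Illustrative path only; my real files look like the one in the trace.
        Path file = new Path("/seDNS/mapred-out/example-crawl-result.dns");

        // Crawler 1: creates the file; the namenode allocates blocks for it.
        FSDataOutputStream out1 = fs.create(file);

        // Crawler 2: creates a file with the same name. No exception is
        // thrown; the namenode allocates new blocks and seems to
        // invalidate the ones from the first create.
        FSDataOutputStream out2 = fs.create(file);

        // Crawler 1 can still write without any error...
        out1.write("some crawl data".getBytes());

        // ...but closing the stream fails with
        // "java.io.IOException: Could not complete write to file ...".
        out1.close();

        // Crawler 2's stream closes fine.
        out2.close();
    }
}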
Can anybody verify that behavior?
Thx and have a great weekend.

Cu on the 'net,
                      Bye - bye,

                                 <<<<< André <<<< >>>> èrbnA >>>>>

Chris K Wensel wrote:
> I think I've seen that in EC2 where the security group wasn't
> authorized to connect to itself, but it was obvious things were
> wrong, as no nodes were showing up as they booted.
>
> Since then, I may have seen something similar intermittently in EC2
> on 0.16.0, but many other issues prevented me from pursuing it.
>
> Bottom line: it looks like a network issue.
>
> On Feb 29, 2008, at 2:48 PM, André Martin wrote:
>
>> Also, before the "Could not complete write to file" exception shows
>> up, I see the following exception in my logs:
>>> java.io.IOException: Could not get block locations. Aborting...
>>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
>>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
>>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
>> André Martin wrote:
>>> Hi everyone,
>>> I'm seeing the above exception on my DFS clients:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /seDNS/mapred-out/17A83EC8CC5DD15549657DF36CA9F3236EC121DB/acengineering-kz-20080229232339788-756.dns by DFSClient_1073032245
>>>>    at org.apache.hadoop.dfs.NameNode.complete(NameNode.java:341)
>>>>    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>>>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>>>    at java.lang.reflect.Method.invoke(Unknown Source)
>>>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
>>>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:910)
>>>>    at org.apache.hadoop.ipc.Client.call(Client.java:512)
>>>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>>>>    at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>>>>    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>>>    at java.lang.reflect.Method.invoke(Unknown Source)
>>>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>    at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>>>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:2288)
>>>>    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:51)
>>>>    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:67)
>>> Any idea why this exception is thrown?
>>> Thx in advance.
>>>
>>> Cu on the 'net,
>>>                       Bye - bye,
>>>
>>>                                  <<<<< André <<<< >>>> èrbnA >>>>>
>>
>>
>
> Chris K Wensel
> chris@wensel.net
> http://chris.wensel.net/
>
>
>



Re: java.io.IOException: Could not complete write to file // hadoop-0.16.0

Posted by Chris K Wensel <ch...@wensel.net>.
I think I've seen that in EC2 where the security group wasn't
authorized to connect to itself, but it was obvious things were wrong,
as no nodes were showing up as they booted.

Since then, I may have seen something similar intermittently in EC2 on
0.16.0, but many other issues prevented me from pursuing it.

Bottom line: it looks like a network issue.

On Feb 29, 2008, at 2:48 PM, André Martin wrote:

> Also, before the "Could not complete write to file" exception shows
> up, I see the following exception in my logs:
>> java.io.IOException: Could not get block locations. Aborting...
>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
> André Martin wrote:
>> Hi everyone,
>> I'm seeing the above exception on my DFS clients:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /seDNS/mapred-out/17A83EC8CC5DD15549657DF36CA9F3236EC121DB/acengineering-kz-20080229232339788-756.dns by DFSClient_1073032245
>>>    at org.apache.hadoop.dfs.NameNode.complete(NameNode.java:341)
>>>    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>>    at java.lang.reflect.Method.invoke(Unknown Source)
>>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
>>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:910)
>>>    at org.apache.hadoop.ipc.Client.call(Client.java:512)
>>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>>>    at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>>>    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>>    at java.lang.reflect.Method.invoke(Unknown Source)
>>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>    at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>>>    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:2288)
>>>    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:51)
>>>    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:67)
>> Any idea why this exception is thrown?
>> Thx in advance.
>>
>> Cu on the 'net,
>>                       Bye - bye,
>>
>>                                  <<<<< André <<<< >>>> èrbnA >>>>>
>
>

Chris K Wensel
chris@wensel.net
http://chris.wensel.net/




Re: java.io.IOException: Could not complete write to file // hadoop-0.16.0

Posted by André Martin <ma...@andremartin.de>.
Also, before the "Could not complete write to file" exception shows up,
I see the following exception in my logs:
> java.io.IOException: Could not get block locations. Aborting...
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:1824)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1100(DFSClient.java:1479)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1571)
André Martin wrote:
> Hi everyone,
> I'm seeing the above exception on my DFS clients:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /seDNS/mapred-out/17A83EC8CC5DD15549657DF36CA9F3236EC121DB/acengineering-kz-20080229232339788-756.dns by DFSClient_1073032245
>>     at org.apache.hadoop.dfs.NameNode.complete(NameNode.java:341)
>>     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>     at java.lang.reflect.Method.invoke(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:910)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:512)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
>>     at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>>     at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>     at java.lang.reflect.Method.invoke(Unknown Source)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>     at org.apache.hadoop.dfs.$Proxy0.complete(Unknown Source)
>>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:2288)
>>     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:51)
>>     at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:67)
> Any idea why this exception is thrown?
> Thx in advance.
>
> Cu on the 'net,
>                        Bye - bye,
>
>                                   <<<<< André <<<< >>>> èrbnA >>>>>