Posted to common-dev@hadoop.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2006/04/13 19:17:02 UTC

[jira] Commented: (HADOOP-128) Failure to replicate dfs block kills client

    [ http://issues.apache.org/jira/browse/HADOOP-128?page=comments#action_12374375 ] 

Owen O'Malley commented on HADOOP-128:
--------------------------------------

The read and write block functionality needs to be factored out of the huge if/then/else. I'll open a new bug for that.
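A minimal sketch of what such a refactoring might look like, assuming a dispatch on the request opcode; the class, method, and opcode names here are illustrative only, not the actual DataNode API:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: instead of one huge if/then/else in the
// connection-handling loop, dispatch each opcode to its own method.
class DataXceiverSketch {
    static final byte OP_WRITE_BLOCK = 80;  // illustrative opcode values
    static final byte OP_READ_BLOCK = 81;

    void process(DataInputStream in, DataOutputStream out) throws IOException {
        byte op = in.readByte();
        switch (op) {
            case OP_READ_BLOCK:
                readBlock(in, out);   // formerly one branch of the giant if/else
                break;
            case OP_WRITE_BLOCK:
                writeBlock(in, out);  // formerly another branch
                break;
            default:
                throw new IOException("Unknown opcode " + op);
        }
    }

    // Each handler owns the protocol details for one operation.
    void readBlock(DataInputStream in, DataOutputStream out) throws IOException { /* ... */ }
    void writeBlock(DataInputStream in, DataOutputStream out) throws IOException { /* ... */ }
}
```

Keeping each operation in its own method also makes it easier to handle per-operation failures (like the one in this bug) without tangling the error paths together.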

> Failure to replicate dfs block kills client
> -------------------------------------------
>
>          Key: HADOOP-128
>          URL: http://issues.apache.org/jira/browse/HADOOP-128
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.1.1
>  Environment: ~200 node linux cluster (kernel 2.6, redhat, 2 hyper threaded cpus)
>     Reporter: Owen O'Malley
>     Assignee: Owen O'Malley
>  Attachments: datanode-mirroring.patch, datanode.no-ws-diff
>
> When the datanode gets an exception, which is logged as:
> 060407 155835 13 DataXCeiver
> java.io.EOFException
>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>         at java.io.DataInputStream.readLong(DataInputStream.java:380)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:462)
>         at java.lang.Thread.run(Thread.java:595)
> it closes the user's connection to the data node, which causes the client to get an IOException from:
>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>         at java.io.DataInputStream.readLong(DataInputStream.java:380)
>         at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.internalClose(DFSClient.java:883)
>  
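The attached datanode-mirroring.patch presumably addresses this; the general idea can be sketched as follows. This is only an illustration under assumed names (BlockWriterSketch, writePacket), not the patch itself: a failure while forwarding to the mirror datanode is swallowed and the mirror dropped, rather than closing the client's connection.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of tolerating a replication (mirror) failure:
// keep the client's local write alive even if the downstream copy fails.
class BlockWriterSketch {
    private final OutputStream local;
    private OutputStream mirror;  // next datanode in the pipeline, may be dropped

    BlockWriterSketch(OutputStream local, OutputStream mirror) {
        this.local = local;
        this.mirror = mirror;
    }

    void writePacket(byte[] packet) throws IOException {
        local.write(packet);           // a local failure is still fatal to the client
        if (mirror != null) {
            try {
                mirror.write(packet);  // forward the data for replication
            } catch (IOException e) {
                // Swallow the mirror failure: drop the mirror and continue,
                // instead of tearing down the client's connection.
                mirror = null;
            }
        }
    }

    boolean mirrorAlive() { return mirror != null; }
}
```

The design point is that replication is best-effort from the client's perspective: the namenode can re-replicate an under-replicated block later, so a dead mirror should not abort an otherwise healthy write.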

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira