Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2008/04/11 04:20:04 UTC

[jira] Created: (HADOOP-3234) Write pipeline does not recover from first node failure.

Write pipeline does not recover from first node failure.
--------------------------------------------------------

                 Key: HADOOP-3234
                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
             Project: Hadoop Core
          Issue Type: Bug
    Affects Versions: 0.16.0
            Reporter: Raghu Angadi
            Priority: Blocker



While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the write fails.

I think this should be a blocker for either 0.16 or 0.17.
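
To make the failure mode concrete, here is a minimal sketch of the retry loop described above. All names and the simulated timeout are hypothetical illustrations, not the actual DFSClient code:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PipelineRetrySketch {
  // Hypothetical stand-in for pushing a packet through the pipeline.
  // In the reported scenario the second datanode is stuck on the previous
  // block-write, so every attempt times out against the current first node.
  static void writeThroughPipeline(List<String> nodes) throws Exception {
    throw new Exception("write to " + nodes.get(0) + " timed out");
  }

  public static void main(String[] args) {
    List<String> pipeline =
        new ArrayList<String>(Arrays.asList("DN1", "DN2", "DN3"));
    while (!pipeline.isEmpty()) {
      try {
        writeThroughPipeline(pipeline);
        return; // success
      } catch (Exception e) {
        // The client blames the first node and retries with the rest.
        System.out.println("marking " + pipeline.remove(0) + " as bad: "
            + e.getMessage());
      }
    }
    System.out.println("ran out of datanodes; the client write fails");
  }
}
{code}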

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3234) Write pipeline does not recover from first node failure.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587830#action_12587830 ] 

Raghu Angadi commented on HADOOP-3234:
--------------------------------------

I did not get stack traces, but I think this is what happens on the second datanode during the second attempt:

- It receives the request for the block with 'isRecovery' set to true. 
- Inside {{FSDataset.writeToBlock()}}, it interrupts the main receive thread and waits for that thread to exit.
- The main receive thread from the first attempt waits for the {{responder}} thread to exit, but it does not interrupt it.

One fix could be to interrupt {{responder}} inside the main receiver thread.
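
To make the hang concrete, here is a simplified model (hypothetical thread structure; a sleep stands in for the blocked ack write, and this is not the actual BlockReceiver code):

{code}
public class ReceiverJoinSketch {
  public static void main(String[] args) throws Exception {
    // Responder: stands in for the packet-ack thread. Here it blocks in a
    // sleep; in the datanode it is blocked writing acks on a socket.
    final Thread responder = new Thread(new Runnable() {
      public void run() {
        try {
          Thread.sleep(Long.MAX_VALUE); // simulated blocked ack write
        } catch (InterruptedException e) {
          System.out.println("responder: interrupted, exiting");
        }
      }
    });
    responder.start();

    // Receiver: stands in for the main block-receive thread. When it is
    // interrupted it wants to exit, but it first waits for the responder
    // to finish -- without ever interrupting it.
    Thread receiver = new Thread(new Runnable() {
      public void run() {
        while (responder.isAlive()) {
          try {
            responder.join();
          } catch (InterruptedException e) {
            // Buggy behavior: swallow the interrupt and keep waiting.
          }
        }
      }
    });
    receiver.start();

    Thread.sleep(100);
    receiver.interrupt();  // what writeToBlock() does on recovery
    receiver.join(1000);
    System.out.println("receiver exited: " + !receiver.isAlive()); // false
    System.exit(0);        // both threads are still stuck; give up
  }
}
{code}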

> Write pipeline does not recover from first node failure.
> --------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Priority: Blocker
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Issue Comment Edited: (HADOOP-3234) Write pipeline does not recover from first node failure.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587830#action_12587830 ] 

rangadi edited comment on HADOOP-3234 at 4/10/08 7:30 PM:
---------------------------------------------------------------

I did not get stack traces, but I think this is what happens on the second datanode during the second attempt:

- It receives the request for the block with 'isRecovery' set to true. 
- Inside {{FSDataset.writeToBlock()}}, it interrupts the main receive thread and waits for the thread to exit.
- The main receive thread from the first attempt waits for the {{responder}} thread to exit, but it does not interrupt it.

One fix could be to interrupt {{responder}} inside the main receiver thread.
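
Under the same simplified model as the sketch above (again hypothetical; a sleep stands in for the blocked ack write), the proposed fix would look like this:

{code}
public class ReceiverJoinFixSketch {
  public static void main(String[] args) throws Exception {
    final Thread responder = new Thread(new Runnable() {
      public void run() {
        try {
          Thread.sleep(Long.MAX_VALUE); // simulated blocked ack write
        } catch (InterruptedException e) {
          System.out.println("responder: interrupted, exiting");
        }
      }
    });
    responder.start();

    Thread receiver = new Thread(new Runnable() {
      public void run() {
        while (responder.isAlive()) {
          try {
            responder.join();
          } catch (InterruptedException e) {
            responder.interrupt(); // the fix: pass the interrupt along
          }
        }
      }
    });
    receiver.start();

    Thread.sleep(100);
    receiver.interrupt();  // what writeToBlock() does on recovery
    receiver.join(1000);
    System.out.println("receiver exited: " + !receiver.isAlive()); // now true
  }
}
{code}

Note that this only helps if the responder is blocked on something interruptible; as discussed below, a thread parked in a blocking socket write does not respond to Thread.interrupt().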

      was (Author: rangadi):
    I did not get stack traces but I think this is what happens on the second datanode during second attempt :

- It receives the request for the block with 'isRecovery' set to true. 
- In side {{FSDataset.writeToBlock()}} it interrupts the main receive thread for the thread to exit.
- The main receive thread from the first attempt wait for {{responder}} thread to exit, but it does not iterrupt it.

The fix could interrupt {{responder}} inside the main receiver thread.
  
> Write pipeline does not recover from first node failure.
> --------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Priority: Blocker
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3234:
---------------------------------

    Priority: Major  (was: Minor)

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3234:
---------------------------------

         Priority: Minor  (was: Blocker)
    Fix Version/s:     (was: 0.17.0)

Mostly this can be closed, unless there is a way around blocked writes. We could try closing the socket before interrupting the thread, but I am not sure whether close() will block until the write is done.
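
One data point in favor: on common JVMs, closing a socket from another thread makes a blocked write() fail with a SocketException, and close() itself does not normally linger unless SO_LINGER is enabled; whether this holds everywhere is platform dependent, which matches the uncertainty above. A self-contained sketch of the idea (illustrative names, not datanode code):

{code}
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseToUnblockSketch {
  public static void main(String[] args) throws Exception {
    // A server that accepts but never reads, so the client's TCP buffers
    // fill up and its write() blocks -- like the stuck ack write.
    final ServerSocket server = new ServerSocket(0);
    new Thread(new Runnable() {
      public void run() {
        try { server.accept(); Thread.sleep(Long.MAX_VALUE); }
        catch (Exception ignored) { }
      }
    }).start();

    final Socket socket = new Socket("localhost", server.getLocalPort());
    Thread writer = new Thread(new Runnable() {
      public void run() {
        try {
          OutputStream out = socket.getOutputStream();
          byte[] chunk = new byte[64 * 1024];
          while (true) out.write(chunk); // blocks once buffers fill
        } catch (Exception e) {
          System.out.println("writer unblocked: " + e);
        }
      }
    });
    writer.start();

    Thread.sleep(500);  // let the writer block
    socket.close();     // close from another thread...
    writer.join(2000);
    System.out.println("writer still alive: " + writer.isAlive());
    System.exit(0);
  }
}
{code}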

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Minor
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3234:
---------------------------------

    Summary: Write pipeline does not recover from first node failure sometimes.  (was: Write pipeline does not recover from first node failure.)

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Priority: Blocker
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Assigned: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi reassigned HADOOP-3234:
------------------------------------

    Assignee: Raghu Angadi

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3234:
---------------------------------

    Fix Version/s: 0.17.0

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Resolved: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi resolved HADOOP-3234.
----------------------------------

    Resolution: Won't Fix

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3234) Write pipeline does not recover from first node failure.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3234:
---------------------------------

    Component/s: dfs

> Write pipeline does not recover from first node failure.
> --------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Priority: Blocker
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Issue Comment Edited: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12588135#action_12588135 ] 

rangadi edited comment on HADOOP-3234 at 4/11/08 2:33 PM:
---------------------------------------------------------------

This is not an issue while using non-blocking I/O. Looks like reads and writes using regular sockets are not interruptible (really?). So this will be a very rare problem once HADOOP-3124 is committed: it requires "dfs.datanode.socket.write.timeout" to be set to 0 and something like HADOOP-3132 to happen. On 0.16, it is not much of an issue since there is no write timeout at all.

      was (Author: rangadi):
    This is not an issue with non-blocking I/O. Looks like read and write using regular sockets is not interruptible (really?). So this will be a very rare problem when  HADOOP-3124 is committed and "dfs.datanode.socket.write.timeout" is set to 0 and something like HADOOP-3132 happens. On 16, it not much of an issue since there is no write timeout at all.
  
> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Updated: (HADOOP-3234) Write pipeline does not recover from first node failure.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3234:
---------------------------------

    Description: 
While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.

I think this should be a blocker for either 0.16 or 0.17.

  was:

While investigating HADOOP-3132, we had a misconfiguration that resulted in client writing to first datanode in the pipeline with 15 second write timeout. As a result, client breaks the pipeline marking the first datanode (DN1) as the bad node. It then restarts the next pipeline pipeline with the rest of the of the datanodes. But the next (second) datanode was stuck waiting waiting for the the earlier block-write to complete. So the client repeats this procedure until it runs out the datanodes and fails the write.

I think this should be a blocker either for 0.16 or 0.17.


> Write pipeline does not recover from first node failure.
> --------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Priority: Blocker
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-3234) Write pipeline does not recover from first node failure sometimes.

Posted by "Raghu Angadi (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12588135#action_12588135 ] 

Raghu Angadi commented on HADOOP-3234:
--------------------------------------

This is not an issue with non-blocking I/O. Looks like reads and writes using regular sockets are not interruptible (really?). So this will be a very rare problem once HADOOP-3124 is committed: it requires "dfs.datanode.socket.write.timeout" to be set to 0 and something like HADOOP-3132 to happen. On 0.16, it is not much of an issue since there is no write timeout at all.
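
For reference, the split is that java.nio channels implement InterruptibleChannel, so a thread blocked in SocketChannel.write() is woken by Thread.interrupt() with a ClosedByInterruptException, while a thread blocked in java.net.Socket's OutputStream.write() is not. A minimal demonstration of the interruptible side (my illustration, not Hadoop code):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class InterruptibleWriteSketch {
  public static void main(String[] args) throws Exception {
    // A server that accepts but never reads, so the client's write blocks.
    final ServerSocketChannel server = ServerSocketChannel.open();
    server.socket().bind(new InetSocketAddress("localhost", 0));
    new Thread(new Runnable() {
      public void run() {
        try { server.accept(); Thread.sleep(Long.MAX_VALUE); }
        catch (Exception ignored) { }
      }
    }).start();

    final SocketChannel channel = SocketChannel.open(new InetSocketAddress(
        "localhost", server.socket().getLocalPort()));

    Thread writer = new Thread(new Runnable() {
      public void run() {
        ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
        try {
          while (true) {          // blocks once TCP buffers fill
            chunk.clear();
            channel.write(chunk);
          }
        } catch (ClosedByInterruptException e) {
          // A blocked SocketChannel write responds to Thread.interrupt();
          // a blocked java.net.Socket OutputStream.write() does not.
          System.out.println("writer interrupted and unblocked");
        } catch (IOException e) {
          System.out.println("writer failed: " + e);
        }
      }
    });
    writer.start();

    Thread.sleep(500);   // let the writer block
    writer.interrupt();  // unlike a stream socket, this unsticks the write
    writer.join(2000);
    System.exit(0);
  }
}
{code}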

> Write pipeline does not recover from first node failure sometimes.
> ------------------------------------------------------------------
>
>                 Key: HADOOP-3234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3234
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>
> While investigating HADOOP-3132, we had a misconfiguration that resulted in the client writing to the first datanode in the pipeline with a 15 second write timeout. As a result, the client breaks the pipeline, marking the first datanode (DN1) as the bad node. It then restarts the pipeline with the rest of the datanodes. But the next (second) datanode was stuck waiting for the earlier block-write to complete. So the client repeats this procedure until it runs out of datanodes and the client write fails.
> I think this should be a blocker for either 0.16 or 0.17.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.