Posted to common-dev@hadoop.apache.org by "Leitao Guo (JIRA)" <ji...@apache.org> on 2009/03/12 09:20:50 UTC

[jira] Created: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

All reduce tasks should be re-executed when tasktracker with a completed map task failed
----------------------------------------------------------------------------------------

                 Key: HADOOP-5474
                 URL: https://issues.apache.org/jira/browse/HADOOP-5474
             Project: Hadoop Core
          Issue Type: Bug
          Components: mapred
    Affects Versions: 0.19.0
         Environment: CentOS 5,
hadoop-0.19.0
            Reporter: Leitao Guo
            Priority: Critical


When a tasktracker with a completed map task fails, the map task will be re-executed, and all reduce tasks that haven't yet read the data from that tasktracker should be re-executed. But reduce tasks that have already read the data from that tasktracker will not be re-executed.

In this situation, if multiple executions of a map task over the same input can produce different outputs (for example, a map task that emits a random number), the output of the original map task and that of the re-executed map task will probably differ. The re-executed reduce tasks will then read the new output of the re-executed map task, while reduce tasks that already read from the failed tasktracker keep the old output. This can make the final result incorrect.

A recommended solution is to re-execute all reduce tasks whenever a tasktracker with a completed map task fails.
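
For illustration (this sketch is not part of the original report; the class name is hypothetical and the 0.19-era org.apache.hadoop.mapred API is assumed), a non-deterministic map task of the kind described above might look like:

{code:java}
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical mapper that emits a random number per record. Because the
// RNG is seeded from the clock, a re-executed attempt of the same map task
// produces different output than the original attempt, so reduces that
// fetched before the tasktracker failure and reduces that fetch afterwards
// see inconsistent data.
public class RandomValueMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  private final Random random = new Random();

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output,
                  Reporter reporter) throws IOException {
    output.collect(value, new LongWritable(random.nextLong()));
  }
}
{code}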

Any comments? Thanks!




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Resolved: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

Posted by "Owen O'Malley (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley resolved HADOOP-5474.
-----------------------------------

    Resolution: Won't Fix

The cost of this change would be huge. Basically, any node going down, or a CRC failure in the shuffle, would force you to kill all currently running reduces. That is unacceptable. Your application needs to be tolerant of re-execution of tasks; that is a fundamental constraint of map/reduce programming. To make your example work, the map could use the hash of its input split as the seed for the random number generator. That way, re-executions will have consistent behavior.
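
A minimal sketch of the seeding idea above (not from the thread; it assumes the 0.19-era org.apache.hadoop.mapred API and a FileSplit-based input, for which the framework sets map.input.file and map.input.start in the JobConf; the class name is illustrative):

{code:java}
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Seed the RNG from the identity of the input split (file path plus start
// offset) instead of the clock, so every re-execution of the same map task
// generates exactly the same sequence of "random" values.
public class SplitSeededRandomMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  private Random random;

  @Override
  public void configure(JobConf job) {
    String inputFile = job.get("map.input.file", "");
    long splitStart = job.getLong("map.input.start", 0L);
    random = new Random(31L * inputFile.hashCode() + splitStart);
  }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, LongWritable> output,
                  Reporter reporter) throws IOException {
    output.collect(value, new LongWritable(random.nextLong()));
  }
}
{code}

Because the seed is a pure function of the split, a re-executed attempt reproduces the original attempt's output, so it no longer matters which attempt a given reduce fetched from.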



[jira] Commented: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

Posted by "he yongqiang (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681610#action_12681610 ] 

he yongqiang commented on HADOOP-5474:
--------------------------------------

Would separating the job into two jobs resolve your problem? The first job would do only the map phase; once it has finished, the map outputs are stable.
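
A rough sketch of that two-job structure (illustrative only, not from the thread; it uses the old org.apache.hadoop.mapred driver API, and the paths and the mapper reference are hypothetical):

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class TwoPhaseDriver {
  public static void main(String[] args) throws Exception {
    // Job 1: map only. With zero reduces, map output is written straight
    // to HDFS, so once this job completes the map outputs are durable and
    // a later tasktracker failure cannot trigger a map re-execution.
    JobConf mapJob = new JobConf(TwoPhaseDriver.class);
    mapJob.setJobName("map-phase");
    mapJob.setMapperClass(RandomValueMapper.class); // the non-deterministic mapper sketched earlier
    mapJob.setNumReduceTasks(0);
    mapJob.setOutputKeyClass(Text.class);
    mapJob.setOutputValueClass(LongWritable.class);
    mapJob.setOutputFormat(SequenceFileOutputFormat.class);
    FileInputFormat.setInputPaths(mapJob, new Path("/user/demo/input"));
    FileOutputFormat.setOutputPath(mapJob, new Path("/user/demo/intermediate"));
    JobClient.runJob(mapJob);

    // Job 2: identity maps plus the reduces, reading the stable
    // intermediate data produced by job 1.
    JobConf reduceJob = new JobConf(TwoPhaseDriver.class);
    reduceJob.setJobName("reduce-phase");
    reduceJob.setInputFormat(SequenceFileInputFormat.class);
    reduceJob.setMapperClass(IdentityMapper.class);
    reduceJob.setReducerClass(IdentityReducer.class); // stand-in for the real reducer
    reduceJob.setOutputKeyClass(Text.class);
    reduceJob.setOutputValueClass(LongWritable.class);
    FileInputFormat.setInputPaths(reduceJob, new Path("/user/demo/intermediate"));
    FileOutputFormat.setOutputPath(reduceJob, new Path("/user/demo/output"));
    JobClient.runJob(reduceJob);
  }
}
{code}

Once job 1 completes, its output is durable in HDFS; the maps of job 2 are deterministic identity maps over that data, so re-executing them after a tasktracker failure cannot change what the reduces see.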



[jira] Updated: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

Posted by "Leitao Guo (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leitao Guo updated HADOOP-5474:
-------------------------------

    Remaining Estimate: 96h  (was: 48h)
     Original Estimate: 96h  (was: 48h)



[jira] Commented: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

Posted by "Leitao Guo (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681607#action_12681607 ] 

Leitao Guo commented on HADOOP-5474:
------------------------------------

I don't agree that the application should have to be tolerant of this situation. But the cost of re-executing all reduce tasks is very high; do you have any suggestions for solving this issue?



[jira] Commented: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

Posted by "Devaraj Das (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681219#action_12681219 ] 

Devaraj Das commented on HADOOP-5474:
-------------------------------------

bq. In this situation, if multiple executions of a map task over the same input can produce different outputs (for example, a map task that emits a random number), the output of the original map task and that of the re-executed map task will probably differ. The re-executed reduce tasks will then read the new output of the re-executed map task, while reduce tasks that already read from the failed tasktracker keep the old output. This can make the final result incorrect.

I think your application should be tolerant of this happening and be written assuming that maps/reduces can fail or get killed, etc. We really don't want to do what you suggest.



[jira] Commented: (HADOOP-5474) All reduce tasks should be re-executed when tasktracker with a completed map task failed

Posted by "Amareshwari Sriramadasu (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12681215#action_12681215 ] 

Amareshwari Sriramadasu commented on HADOOP-5474:
-------------------------------------------------

bq. When a tasktracker with a completed map task fails, the map task will be re-executed, and all reduce tasks that haven't yet read the data from that tasktracker should be re-executed.
Reduce tasks are not re-executed; they fail to fetch the map output, retry the fetch, and succeed once the re-executed map completes.
