Posted to dev@hive.apache.org by "Zheng Shao (JIRA)" <ji...@apache.org> on 2009/06/11 22:04:07 UTC
[jira] Commented: (HIVE-480) allow option to retry map-reduce tasks
[ https://issues.apache.org/jira/browse/HIVE-480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12718623#action_12718623 ]
Zheng Shao commented on HIVE-480:
---------------------------------
As a side note, the conf in Hadoop is "mapred.max.tracker.failures", which controls the maximum number of permitted failures for each task.
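For anyone looking for where these knobs live, here is a minimal sketch, assuming the old org.apache.hadoop.mapred API of that era; the value 4 is illustrative. One caveat worth noting: in stock Hadoop, "mapred.max.tracker.failures" caps how many task failures of a single job one tasktracker may accumulate before it is blacklisted for that job, while the per-task retry limits are "mapred.map.max.attempts" and "mapred.reduce.max.attempts".

    import org.apache.hadoop.mapred.JobConf;

    public class RetryKnobs {
        public static JobConf withRetrySettings(JobConf conf) {
            // Per-task retry limits: attempts allowed for a single map or
            // reduce task before the whole job is declared failed.
            conf.setMaxMapAttempts(4);       // mapred.map.max.attempts
            conf.setMaxReduceAttempts(4);    // mapred.reduce.max.attempts
            // Failures of this job tolerated on one tasktracker before the
            // tracker is blacklisted for the job (mapred.max.tracker.failures).
            conf.setMaxTaskFailuresPerTracker(4);
            return conf;
        }
    }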
> allow option to retry map-reduce tasks
> --------------------------------------
>
> Key: HIVE-480
> URL: https://issues.apache.org/jira/browse/HIVE-480
> Project: Hadoop Hive
> Issue Type: New Feature
> Components: Query Processor
> Reporter: Joydeep Sen Sarma
>
> for long-running queries with multiple map-reduce jobs - this should help in dealing with transient cluster failures without having to re-run all the tasks.
> ideally - the entire plan could be serialized out and the actual execution of the workflow left to a pluggable workflow execution engine (since this is a problem that has been solved many times already). a sketch of the per-stage retry idea follows below.
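To make the request concrete, here is a minimal sketch of the per-stage retry idea, not Hive's actual Driver code: run each map-reduce stage of a multi-stage plan with a bounded number of attempts, so a transient failure re-runs only the stage that failed rather than the whole query. The stage list, the MAX_ATTEMPTS constant, and the class name are illustrative assumptions.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    // Hypothetical runner for a multi-stage query plan: each stage is retried
    // up to MAX_ATTEMPTS times; stages that already succeeded are never re-run.
    public class RetryingPlanRunner {
        private static final int MAX_ATTEMPTS = 3;   // illustrative retry budget

        public static void runPlan(List<JobConf> stages) throws IOException {
            for (JobConf stage : stages) {
                IOException lastFailure = null;
                for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                    try {
                        JobClient.runJob(stage);   // blocks; throws on job failure
                        lastFailure = null;
                        break;                     // stage succeeded, move on
                    } catch (IOException e) {
                        lastFailure = e;           // transient? try the stage again
                    }
                }
                if (lastFailure != null) {
                    throw lastFailure;             // stage failed even after retries
                }
            }
        }
    }

A pluggable workflow engine, as the description suggests, would take the same serialized stage list and add persistence, so that even a client restart could resume from the last completed stage instead of starting over.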
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.