Posted to dev@hive.apache.org by "Zheng Shao (JIRA)" <ji...@apache.org> on 2009/06/12 02:54:07 UTC

[jira] Updated: (HIVE-480) allow option to retry map-reduce tasks

     [ https://issues.apache.org/jira/browse/HIVE-480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zheng Shao updated HIVE-480:
----------------------------

    Attachment: HIVE-480.1.patch

This patch adds a new configuration property, "hive.exec.retries.max" (default: 1), to HiveConf and hive-default.xml.
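As a sketch of what the patch likely adds (the property name and default come from the comment above; the description text is an assumption, not the patch's actual wording), the entry in hive-default.xml would follow the standard Hadoop configuration format:

```xml
<!-- Hypothetical hive-default.xml entry; description text is illustrative -->
<property>
  <name>hive.exec.retries.max</name>
  <value>1</value>
  <description>Maximum number of times to retry a failed map-reduce task
  before failing the query.</description>
</property>
```

Users could override the default in hive-site.xml or per-session with `set hive.exec.retries.max=3;`, following the usual Hive configuration precedence.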

> allow option to retry map-reduce tasks
> --------------------------------------
>
>                 Key: HIVE-480
>                 URL: https://issues.apache.org/jira/browse/HIVE-480
>             Project: Hadoop Hive
>          Issue Type: New Feature
>          Components: Query Processor
>            Reporter: Joydeep Sen Sarma
>         Attachments: HIVE-480.1.patch
>
>
> for long-running queries with multiple map-reduce jobs - this should help in dealing with transient cluster failures without having to re-run all the tasks.
> ideally - the entire plan can be serialized out and the actual process of executing the workflow can be left to a pluggable workflow execution engine (since this is a problem that has been solved many times already).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.