Posted to common-dev@hadoop.apache.org by "eric baldeschwieler (JIRA)" <ji...@apache.org> on 2006/03/17 05:27:11 UTC

[jira] Created: (HADOOP-91) snapshot a map-reduce to DFS ... and restore

snapshot a map-reduce to DFS ... and restore
--------------------------------------------

         Key: HADOOP-91
         URL: http://issues.apache.org/jira/browse/HADOOP-91
     Project: Hadoop
        Type: New Feature
  Components: mapred  
    Reporter: eric baldeschwieler
    Priority: Minor


The idea is to be able to issue a command to the job tracker that
will halt a map-reduce and archive it to a directory in such a way
that it can later be restarted.

We could also set a mode that would cause this to happen to a job
when it fails.  This would allow one to debug and restart a failing
job reasonably, which might be important, for long running jobs.  It
has certainly been important in similar systems I've seen before.  One 
could restart with a new jar or work bench a single failing map or reduce.
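
To make the shape of the idea concrete, here is a rough sketch of what a
client-facing call might look like.  Everything below is hypothetical: no
such interface exists, and the type, method names, and arguments are only
illustrative.

    // Hypothetical sketch only; nothing like this exists in Hadoop today.
    import java.io.IOException;

    public interface JobSnapshotService {
      /**
       * Halt the named job and archive its state (job.xml, the job jar,
       * completed-task records, intermediate map output) under a DFS
       * directory so the job can be resumed later.
       */
      void snapshotJob(String jobId, String dfsSnapshotDir) throws IOException;

      /**
       * Rebuild a job from an archived snapshot, optionally substituting a
       * new jar (for example, one with a bug fix), and resume from the
       * checkpointed tasks rather than starting over.  Returns the id of
       * the restored job.
       */
      String restoreJob(String dfsSnapshotDir, String replacementJarPath)
          throws IOException;
    }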


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


[jira] Commented: (HADOOP-91) snapshot a map-reduce to DFS ... and restore

Posted by "eric baldeschwieler (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-91?page=comments#action_12371131 ] 

eric baldeschwieler commented on HADOOP-91:
-------------------------------------------

Good points.  An option to specify the replication level would be a good
addition.  Some of this data will be automatically regenerable; only the
meta-data may really need a high replication level.
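
To illustrate, such an option could boil down to a small per-file policy
along these lines.  Everything here (the file categories, the ".state"
suffix, and the numbers) is made up for the sketch, and it assumes DFS
gains per-file replication:

    // Hypothetical policy sketch; not an existing API.
    public class SnapshotReplicationPolicy {
      /** Replication level to request when writing one snapshot file. */
      public static int replicationFor(String snapshotFileName) {
        // Job meta-data (job.xml, task state) is hard to rebuild: keep it
        // well replicated.
        if (snapshotFileName.equals("job.xml")
            || snapshotFileName.endsWith(".state")) {
          return 3;
        }
        // Intermediate map output can be regenerated by re-running tasks:
        // one copy is enough, and a lost block just becomes a re-run.
        return 1;
      }
    }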



[jira] Commented: (HADOOP-91) snapshot a map-reduce to DFS ... and restore

Posted by "Bryan Pendleton (JIRA)" <ji...@apache.org>.
    [ http://issues.apache.org/jira/browse/HADOOP-91?page=comments#action_12371127 ] 

Bryan Pendleton commented on HADOOP-91:
---------------------------------------

This would be very useful, although it should be noted that snapshotting a
job to DFS costs extra storage proportional to the replication level (well,
replication+1, counting the original copy).  If you're running jobs that
produce large intermediate results, then checkpointing with, say, the
default 3x replication requires 4 times as much space as the job would
otherwise use.  For no-side-effect jobs, perhaps the default should be to
checkpoint with a replication of 1 (assuming per-file replication gets
added to DFS) and just let lost blocks turn into lost tasks that get
re-run.  Hadoop should minimize space usage wherever possible if it's
really going to scale up to huge workloads.
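
To put numbers on that overhead, a quick back-of-the-envelope sketch (the
500 GB figure is just an example, not something from this issue):

    // Hypothetical space check for checkpointing intermediate map output.
    public class SnapshotSpaceEstimate {
      public static void main(String[] args) {
        long intermediateBytes   = 500L * 1024 * 1024 * 1024; // example: 500 GB of map output
        int  snapshotReplication = 3;                         // DFS default
        // Extra DFS blocks written by the snapshot: replication x data size.
        long extraDfsBytes = intermediateBytes * (long) snapshotReplication; // 1.5 TB
        // Footprint while checkpointed: the local copy plus the DFS copies,
        // i.e. 4x the data at 3x replication, or 2x at replication 1.
        long totalBytes = intermediateBytes + extraDfsBytes;
        System.out.println("extra DFS bytes : " + extraDfsBytes);
        System.out.println("total footprint : " + totalBytes
            + " (" + (totalBytes / intermediateBytes) + "x the intermediate data)");
      }
    }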
