Posted to mapreduce-user@hadoop.apache.org by "Hiller, Dean (Contractor)" <de...@broadridge.com> on 2011/01/02 22:38:35 UTC

keep.failed.tasks.files what about keep successful ones?

I have a job that reports 100% complete, but in reality it failed: the
data conversion has some kind of bug.  I would like to keep the
successful tasks' files around temporarily (this is a small,
development-only cluster) so that I can run the IsolationRunner and
debug a single partition to see what is going on.  Is there a good way
to do this?
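For what it's worth, besides keep.failed.task.files there also appears to be a
keep.task.files.pattern property (at least in the 0.20.x line) that preserves
the local files of any task whose id matches a regex, successful or not.
Something like this in the job conf might do it, though the property names may
differ by version, so please double-check against your release:

```xml
<!-- Keep local task files for every task of this job (the regex matches
     all task ids). Assumes Hadoop 0.20.x property names; newer releases
     renamed these (e.g. mapreduce.task.files.preserve.filepattern). -->
<property>
  <name>keep.task.files.pattern</name>
  <value>.*</value>
</property>
```

With the files kept, IsolationRunner can then be pointed at the job.xml left
in the task's local working directory.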

 

Or do I have to set some kind of config so the tasks run inside the
TaskTracker JVM itself, so that I can attach the debugger to that instead?
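For the debugger route, one option (an untested sketch using the standard JVM
JDWP flags rather than anything Hadoop-specific) would be to pass a debug
agent through mapred.child.java.opts so each child JVM waits for a remote
debugger on a known port:

```xml
<!-- Make each task child JVM listen for a remote debugger on port 8000
     and suspend until one attaches. Only practical when running one task
     at a time on a small dev cluster, since concurrent tasks on the same
     node would collide on the port. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000</value>
</property>
```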

 

Thanks,

Dean


This message and any attachments are intended only for the use of the addressee and
may contain information that is privileged and confidential. If the reader of the 
message is not the intended recipient or an authorized representative of the
intended recipient, you are hereby notified that any dissemination of this
communication is strictly prohibited. If you have received this communication in
error, please notify us immediately by e-mail and delete the message and any
attachments from your system.