Posted to common-user@hadoop.apache.org by #YONG YONG CHENG# <aa...@pmail.ntu.edu.sg> on 2010/01/28 10:59:44 UTC

Cleanup Attempt in Map Task

Good Day,
 
Is there any way to control the cleanup attempt of a failed map task without changing the Hadoop platform itself, i.e. from within my MapReduce application?
 
I have found that FileSystem.copyFromLocalFile() sometimes takes a long time. Is there another method in the Hadoop API that I can use to transfer my file to HDFS more quickly?
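
For concreteness, a minimal sketch of what I am doing now, and the kind of alternative I am asking about (paths are placeholders; no claim that the alternative is faster, since the stall may be in the write pipeline rather than in the method itself):

import java.io.FileInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path src = new Path("/tmp/local-output.dat");  // local file (placeholder)
        Path dst = new Path("/user/me/output.dat");    // HDFS target (placeholder)

        // The call that sometimes blocks for ~55 seconds:
        fs.copyFromLocalFile(src, dst);

        // Alternative in the same API: stream the bytes into HDFS directly
        // instead of going through the utility wrapper.
        FileInputStream in = new FileInputStream("/tmp/local-output.dat");
        IOUtils.copyBytes(in, fs.create(new Path("/user/me/output2.dat")), 4096, true);
    }
}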
 
Situation: Each map task in my job finishes its computation very quickly, in under 5 seconds. But it then usually hangs in FileSystem.copyFromLocalFile(), which can take more than 55 seconds. With about 5 seconds of map work plus more than 55 seconds in the copy, the attempt exceeds the 1-minute task timeout and fails. Subsequent attempts fail at the same FileSystem.copyFromLocalFile() call.
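
(One application-level workaround along these lines, assuming the old 0.20-era mapred API: run the slow copy in a side thread and call Reporter.progress() until it finishes, so the TaskTracker does not count the copy as inactivity. A sketch only; class names and paths are made up, and error handling is elided.)

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CopyWithHeartbeat extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> out, Reporter reporter)
            throws IOException {
        // ... the fast (~5 s) map work goes here ...

        // A real mapper would reuse the JobConf passed to configure().
        final FileSystem fs = FileSystem.get(new Configuration());
        final Path src = new Path("/tmp/local-output.dat"); // placeholder
        final Path dst = new Path("/user/me/output.dat");   // placeholder

        // Do the slow copy in a side thread; error handling elided.
        Thread copier = new Thread(new Runnable() {
            public void run() {
                try {
                    fs.copyFromLocalFile(src, dst);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        copier.start();

        // Ping the TaskTracker once a second until the copy finishes,
        // so the attempt is not killed for exceeding the task timeout.
        while (copier.isAlive()) {
            reporter.progress();
            try { Thread.sleep(1000); } catch (InterruptedException e) { }
        }
    }
}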
 
Thanks. Any solutions are welcome.

Re: Cleanup Attempt in Map Task

Posted by Jeff Zhang <zj...@gmail.com>.
One easy way is to increase the timeout by setting mapred.task.timeout in mapred-site.xml.
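
For illustration, the per-job equivalent, set on the JobConf from the application rather than cluster-wide in mapred-site.xml (the class name and the 2-minute value are made up):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitWithLongerTimeout {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitWithLongerTimeout.class);
        // Per-job equivalent of the mapred-site.xml setting; value in ms.
        conf.setLong("mapred.task.timeout", 120000L); // 2 minutes
        // ... set input/output paths, mapper class, etc. ...
        JobClient.runJob(conf);
    }
}

Setting it in mapred-site.xml changes the default for the whole cluster; setting it on the job's configuration affects only that job.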





-- 
Best Regards

Jeff Zhang