Posted to common-user@hadoop.apache.org by john li <li...@gmail.com> on 2010/01/29 07:52:29 UTC

always have killed or failed tasks in jobs when running multiple jobs concurrently

When Hadoop runs multiple jobs concurrently, that is, when the cluster is busy,
there are always killed tasks in some jobs, although the jobs succeed in the end.

Can anybody tell me why?

-- 
Regards
Junyong

Re: always have killed or failed tasks in jobs when running multiple jobs concurrently

Posted by Wang Xu <gn...@gmail.com>.
On Fri, Jan 29, 2010 at 2:52 PM, john li <li...@gmail.com> wrote:
> When Hadoop runs multiple jobs concurrently, that is, when the cluster is busy,
> there are always killed tasks in some jobs, although the jobs succeed in the end.
>
> Can anybody tell me why?

If they are only "killed", don't worry about it. This is speculative
execution: the JobTracker schedules idle TaskTrackers to run extra attempts
of slow tasks, so a slow task may run multiple times; only the first attempt
to finish is marked successful, and the remaining attempts are killed.
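
If the duplicate attempts really bother you, speculative execution can be
switched off per job. A rough sketch with the classic
org.apache.hadoop.mapred API (0.20 era; untested, and the class name here is
just an example, so check the method names against your version):

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class NoSpeculationJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(NoSpeculationJob.class);
            conf.setJobName("no-speculation-example");

            // Stop the JobTracker from launching duplicate (speculative)
            // attempts of slow map and reduce tasks.
            conf.setMapSpeculativeExecution(false);
            conf.setReduceSpeculativeExecution(false);

            // ... set input/output paths, mapper and reducer classes here ...

            JobClient.runJob(conf);
        }
    }

Usually it is better to leave speculation on, though; the killed attempts
cost a little extra work but can shorten the job on a busy cluster.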

-- 
Wang Xu
Jonathan Swift  - "May you live every day of your life." -
http://www.brainyquote.com/quotes/authors/j/jonathan_swift.html

Re: always have killed or failed tasks in jobs when running multiple jobs concurrently

Posted by Rekha Joshi <re...@yahoo-inc.com>.
You can find out the reason from the JobTracker (JT) logs (e.g. memory or timeout restrictions) and adjust the timeout (mapred.task.timeout) or the memory parameters accordingly. Refer to http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html
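
For instance, if the JT logs show tasks being killed for failing to report
status, something like this raises the per-task timeout for one job (a rough
sketch with the old JobConf API; the property names are the 0.20-era ones and
the class name is only illustrative, so double-check against your release):

    import org.apache.hadoop.mapred.JobConf;

    public class LongTimeoutJob {
        public static void main(String[] args) {
            JobConf conf = new JobConf(LongTimeoutJob.class);

            // Allow 20 minutes between progress reports before a task is failed.
            // mapred.task.timeout is in milliseconds (default 600000 = 10 minutes).
            conf.setLong("mapred.task.timeout", 20 * 60 * 1000L);

            // Give each task JVM a bigger heap if the logs show memory errors.
            conf.set("mapred.child.java.opts", "-Xmx1024m");

            // ... configure input/output, mapper, reducer and submit as usual ...
        }
    }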
Cheers,
/R

On 1/29/10 12:22 PM, "john li" <li...@gmail.com> wrote:

When Hadoop runs multiple jobs concurrently, that is, when the cluster is busy,
there are always killed tasks in some jobs, although the jobs succeed in the end.

Can anybody tell me why?

--
Regards
Junyong