Posted to common-user@hadoop.apache.org by Rekha Joshi <re...@yahoo-inc.com> on 2010/01/29 08:28:06 UTC
Re: always have killed or failed task in job when running multi jobs concurrently
You can find the reason in the JobTracker (JT) logs (e.g. memory or timeout restrictions) and adjust the timeout - mapred.task.timeout - or the memory parameters accordingly. Refer to http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html
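As an illustrative sketch (the property name is from Hadoop 0.20; the value shown is an assumption - pick one suited to your tasks), the timeout could be raised in mapred-site.xml:

```xml
<!-- mapred-site.xml: per-task timeout in milliseconds.
     600000 (10 minutes) is the 0.20 default; a task that does not
     report progress within this window is killed by the framework.
     The 1200000 value below is only an example. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1200000</value>
</property>
```

Note that raising the timeout only helps if tasks are being killed for failing to report progress; if the JT logs instead show memory limits being hit, the child JVM memory settings are what need adjusting.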
Cheers,
/R
On 1/29/10 12:22 PM, "john li" <li...@gmail.com> wrote:
When Hadoop is running multiple jobs concurrently, that is, when the cluster is busy, some jobs always have killed tasks, although the jobs succeed in the end.
Can anybody tell me why?
--
Regards
Junyong