Posted to user@hadoop.apache.org by 谢良 <xi...@xiaomi.com> on 2012/10/19 06:10:48 UTC

Re: OOM/crashes due to process number limit

What's the exact OOM error message? Is it something like "OutOfMemoryError: unable to create new native thread"?
________________________________
From: Aiden Bell [aiden449@gmail.com]
Sent: 18 October 2012 22:24
To: user@hadoop.apache.org
Subject: OOM/crashes due to process number limit

Hi All,

I'm running quite a basic map/reduce job with 10 or so map tasks. During the job's execution, the
entire stack (and my OS, for that matter) starts failing because it is unable to fork() new processes.
It seems Hadoop (1.0.3) is creating 700+ threads and exhausting this resource; RAM utilisation is fine, however.
This still occurs with ulimit set to unlimited.

Any ideas or advice would be great; it seems very sketchy for a job that doesn't require much grunt.

Cheers!
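
With "unable to fork()" errors like this, the limit that usually matters is the one inherited by the TaskTracker JVM and its child task JVMs (the per-user process/thread cap, ulimit -u), not the one an interactive shell reports, since a daemon started from an init script may not pick up the shell's settings. Below is a minimal sketch for checking both from inside a JVM on a Linux host; the class name is made up and it is not part of Hadoop.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.lang.management.ManagementFactory;

// Hypothetical diagnostic, not part of Hadoop: prints how many threads this
// JVM has live and which process limits the kernel actually enforces on it.
// A shell "ulimit -u unlimited" does not necessarily reach a daemon started
// from an init script, so check the running process itself.
public class ThreadLimitCheck {
    public static void main(String[] args) throws IOException {
        System.out.println("Live JVM threads: "
                + ManagementFactory.getThreadMXBean().getThreadCount());

        // Linux only: /proc/self/limits lists this process's soft/hard limits.
        BufferedReader r = new BufferedReader(new FileReader("/proc/self/limits"));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("Limit")
                        || line.contains("processes")
                        || line.contains("open files")) {
                    System.out.println(line);
                }
            }
        } finally {
            r.close();
        }
    }
}

Run on the affected node (or dropped into a map task), it shows whether the 700+ threads are running into a much lower inherited "Max processes" or "Max open files" limit.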


Re: Re: OOM/crashes due to process number limit

Posted by 谢良 <xi...@xiaomi.com>.
A thread dump should give you some clue about which kind of thread is growing beyond what's expected. Hope it helps.
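
A quick way to get one is jstack <pid> (or kill -3 <pid>) against the TaskTracker or child JVM and then tallying the thread names. The rough sketch below does the same tally programmatically from inside a JVM; the class name and the name-prefix grouping are only an illustration, not anything from Hadoop itself.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Map;
import java.util.TreeMap;

// Illustrative only (class name and grouping heuristic are made up): tallies
// live threads by the part of the name before the last '-', so a runaway pool
// stands out when it accounts for most of the 700+ threads.
public class ThreadNameTally {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<String, Integer> counts = new TreeMap<String, Integer>();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info == null) {
                continue;   // thread may have exited while being sampled
            }
            String name = info.getThreadName();
            int dash = name.lastIndexOf('-');
            String prefix = dash > 0 ? name.substring(0, dash) : name;
            Integer n = counts.get(prefix);
            counts.put(prefix, n == null ? 1 : n + 1);
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            System.out.println(e.getValue() + "\t" + e.getKey());
        }
    }
}

If one prefix (the IPC client connection threads, say) dominates the tally, that is the pool to chase down.
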
________________________________
From: Aiden Bell [aiden449@gmail.com]
Sent: 19 October 2012 19:04
To: user@hadoop.apache.org
Subject: Re: Re: OOM/crashes due to process number limit

Yep, and then the entire OS can't fork new processes.

On 19 October 2012 05:10, 谢良 <xi...@xiaomi.com> wrote:
What's the exact OOM error message? Is it something like "OutOfMemoryError: unable to create new native thread"?
________________________________
From: Aiden Bell [aiden449@gmail.com]
Sent: 18 October 2012 22:24
To: user@hadoop.apache.org
Subject: OOM/crashes due to process number limit

Hi All,

I'm running quite a basic map/reduce job with 10 or so map tasks. During the job's execution, the
entire stack (and my OS, for that matter) starts failing because it is unable to fork() new processes.
It seems Hadoop (1.0.3) is creating 700+ threads and exhausting this resource; RAM utilisation is fine, however.
This still occurs with ulimit set to unlimited.

Any ideas or advice would be great; it seems very sketchy for a job that doesn't require much grunt.

Cheers!




--
------------------------------------------------------------------
Never send sensitive or private information via email unless it is encrypted. http://www.gnupg.org


Re: Re: OOM/crashes due to process number limit

Posted by Aiden Bell <ai...@gmail.com>.
Yep, and then the entire OS can't fork new processes.

On 19 October 2012 05:10, 谢良 <xi...@xiaomi.com> wrote:

>  What's the exact OOM error message? Is it something like "OutOfMemoryError:
> unable to create new native thread"?
>  ------------------------------
> *From:* Aiden Bell [aiden449@gmail.com]
> *Sent:* 18 October 2012 22:24
> *To:* user@hadoop.apache.org
> *Subject:* OOM/crashes due to process number limit
>
>  Hi All,
>
> I'm running quite a basic map/reduce job with 10 or so map tasks. During
> the job's execution, the
> entire stack (and my OS, for that matter) starts failing because it is
> unable to fork() new processes.
> It seems Hadoop (1.0.3) is creating 700+ threads and exhausting this
> resource; RAM utilisation is fine, however.
> This still occurs with ulimit set to unlimited.
>
> Any ideas or advice would be great; it seems very sketchy for a job that
> doesn't require much grunt.
>
> Cheers!
>
>


-- 
------------------------------------------------------------------
Never send sensitive or private information via email unless it is
encrypted. http://www.gnupg.org
