Posted to hdfs-user@hadoop.apache.org by Kan Tao <ke...@gmail.com> on 2013/12/11 03:59:18 UTC

How to obtain the exception that actually failed the job in the Mapper or Reducer at runtime?

Hi guys,



Does anyone know how to ‘capture’ the exception that actually failed the
job's Mapper or Reducer at runtime? Hadoop seems designed to be fault
tolerant: failed tasks are automatically rerun a certain number of times,
and the real problem isn't exposed unless you look into the error logs. In
my use case, I would like to capture the exception and respond differently
based on the type of exception.
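
For illustration, this is roughly the kind of thing I have in mind. It's just
a sketch, not tested code; the mapper, the counter enum and the record format
are made up, and I'm assuming a recent Hadoop 2.x with the new
org.apache.hadoop.mapreduce API. The idea is to catch the exception inside the
mapper, classify it with a custom counter and skip the record (counters from
attempts that end up failing don't seem to survive into the final job
counters, so the sketch swallows the exception rather than rethrowing it):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ClassifyingMapper
    extends Mapper<LongWritable, Text, Text, LongWritable> {

  // One counter per failure category (names are made up).
  public enum FailureCause { PARSE_ERROR, OTHER }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    try {
      // Pretend each input line is a number we want to sum.
      long n = Long.parseLong(value.toString().trim());
      context.write(new Text("sum"), new LongWritable(n));
    } catch (NumberFormatException e) {
      // Record the failure type and skip the record instead of
      // letting the whole attempt die.
      context.getCounter(FailureCause.PARSE_ERROR).increment(1);
    } catch (RuntimeException e) {
      context.getCounter(FailureCause.OTHER).increment(1);
    }
  }
}

The driver could then read
job.getCounters().findCounter(FailureCause.PARSE_ERROR).getValue() after
waitForCompletion() and take a different action per failure type. But that
only covers exceptions I can anticipate and catch myself; what I'm after is
the exception that actually kills the job.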

Thanks in advance.



Regards,

Ken

Re: How to obtain the exception that actually failed the job in the Mapper or Reducer at runtime?

Posted by Silvina Caíno Lores <si...@gmail.com>.
Hi,

You can check the userlogs directory, where the job and attempt logs are
stored. For each attempt you should have a stderr, a stdout and a syslog
file. The first two hold the program's output on each stream (useful for
debugging), while the last contains execution details provided by the
platform.
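
If you'd rather not dig through those files by hand, roughly the same
diagnostics can also be pulled from the driver once the job has finished. A
rough, untested sketch (the class name and the job setup are made up; I'm
assuming a recent Hadoop 2.x with the new org.apache.hadoop.mapreduce API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCompletionEvent;

public class FailureInspector {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "example");
    // ... mapper, reducer, input and output paths omitted ...

    boolean succeeded = job.waitForCompletion(true);
    if (!succeeded) {
      // Fetch a batch of task-completion events starting at offset 0 and
      // print the diagnostics (usually the stack trace) of each failed
      // attempt.
      for (TaskCompletionEvent event : job.getTaskCompletionEvents(0)) {
        if (event.getStatus() == TaskCompletionEvent.Status.FAILED) {
          for (String diag : job.getTaskDiagnostics(event.getTaskAttemptId())) {
            System.err.println(diag);
            // One could match on the exception class name here and
            // react differently per failure type.
          }
        }
      }
    }
  }
}

The diagnostics are essentially the same stack traces that end up in the
attempt's syslog, so you can branch on the exception class name without
reading the log files yourself.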

Hope it helps.
Best,
Silvina
On 11 Dec 2013 03:59, "Kan Tao" <ke...@gmail.com> wrote:

> Hi guys,
>
>
>
> Does anyone know how to ‘capture’ the exception that actually failed the
> job's Mapper or Reducer at runtime? Hadoop seems designed to be fault
> tolerant: failed tasks are automatically rerun a certain number of times,
> and the real problem isn't exposed unless you look into the error logs. In
> my use case, I would like to capture the exception and respond differently
> based on the type of exception.
>
> Thanks in advance.
>
>
>
> Regards,
>
> Ken
>
>
