Posted to mapreduce-user@hadoop.apache.org by pr...@nokia.com on 2011/02/25 17:00:14 UTC

Catching mapred exceptions on the client

Hello all,
I have a few MapReduce jobs that I am calling from a Java driver. The problem I am facing is that when there is an exception in a mapred job, the exception is not propagated to the client, so even if the first job fails, it goes on to the second job and so on. Is there another way of catching exceptions from mapred jobs on the client side?

I am using hadoop-0.20.2.

My Example is:
Driver {
        try {
                Call MapredJob1;
                Call MapredJob2;
                ...
        } catch (Exception e) {
                throw new Exception(e);
        }
}

When MapredJob1 throws a ClassNotFoundException, MapredJob2 and the others still execute.

Any insight into it is appreciated.

Praveen


Re: Catching mapred exceptions on the client

Posted by Harsh J <qw...@gmail.com>.
Hello,

On Sat, Feb 26, 2011 at 12:24 AM,  <pr...@nokia.com> wrote:
> James,
> Thanks for the response. I am using waitForCompletion (job.waitForCompletion(true)) for each job, so the jobs are definitely executed sequentially.

There's a Job.isSuccessful() call you could use, perhaps, after
waitForCompletion(true) returns.
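
A minimal driver sketch along those lines (the job names and the commented-out
configuration are placeholders, not your actual code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class Driver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            Job job1 = new Job(conf, "MapredJob1");
            // ... set mapper, reducer, input and output paths for job1 ...

            // waitForCompletion(true) blocks until the job finishes and throws
            // (for example ClassNotFoundException) if submission itself fails.
            job1.waitForCompletion(true);

            // isSuccessful() tells you whether the completed job succeeded, so
            // you can stop here instead of falling through to the next job.
            if (!job1.isSuccessful()) {
                throw new RuntimeException("MapredJob1 failed, not running MapredJob2");
            }

            Job job2 = new Job(conf, "MapredJob2");
            // ... set mapper, reducer, input and output paths for job2 ...
            job2.waitForCompletion(true);
            if (!job2.isSuccessful()) {
                throw new RuntimeException("MapredJob2 failed");
            }
        }
    }

waitForCompletion(true) itself also returns a boolean, which you could check
directly in place of isSuccessful().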

-- 
Harsh J
www.harshj.com

RE: Catching mapred exceptions on the client

Posted by pr...@nokia.com.
James,
Thanks for the response. I am using waitForCompletion (job.waitForCompletion(true)) for each job, so the jobs are definitely executed sequentially.

Praveen

-----Original Message-----
From: ext James Seigel [mailto:james@tynt.com] 
Sent: Friday, February 25, 2011 11:15 AM
To: common-user@hadoop.apache.org
Subject: Re: Catching mapred exceptions on the client

Hello,

It is hard to give advice without the specific code.  However, if you don't have your job submission set up to wait for completion then it might be launching all your jobs at the same time.

Check to see how your jobs are being submitted.

Sorry, I can't be more helpful.

James


On 2011-02-25, at 9:00 AM, <pr...@nokia.com> wrote:

> Hello all,
> I have a few MapReduce jobs that I am calling from a Java driver. The problem I am facing is that when there is an exception in a mapred job, the exception is not propagated to the client, so even if the first job fails, it goes on to the second job and so on. Is there another way of catching exceptions from mapred jobs on the client side?
> 
> I am using hadoop-0.20.2.
> 
> My Example is:
> Driver {
>         try {
>                 Call MapredJob1;
>                 Call MapredJob2;
>                 ...
>         } catch (Exception e) {
>                 throw new Exception(e);
>         }
> }
> 
> When MapredJob1 throws a ClassNotFoundException, MapredJob2 and the others still execute.
> 
> Any insight into it is appreciated.
> 
> Praveen
> 


Re: Catching mapred exceptions on the client

Posted by James Seigel <ja...@tynt.com>.
Hello,

It is hard to give advice without the specific code.  However, if you don’t have your job submission set up to wait for completion then it might be launching all your jobs at the same time.

Check to see how your jobs are being submitted.
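
In sketch form, the difference looks like this (the job variable is just a
placeholder):

    // Non-blocking: submit() returns as soon as the job has been handed off,
    // so a driver that calls submit() on several jobs in a row effectively
    // launches them all at once.
    job.submit();

    // Blocking: waitForCompletion(true) does not return until the job has
    // finished, so the next job in the driver only starts after this one,
    // and the return value tells you whether it succeeded.
    boolean succeeded = job.waitForCompletion(true);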

Sorry, I can’t be more helpful.

James


On 2011-02-25, at 9:00 AM, <pr...@nokia.com> wrote:

> Hello all,
> I have a few MapReduce jobs that I am calling from a Java driver. The problem I am facing is that when there is an exception in a mapred job, the exception is not propagated to the client, so even if the first job fails, it goes on to the second job and so on. Is there another way of catching exceptions from mapred jobs on the client side?
> 
> I am using hadoop-0.20.2.
> 
> My Example is:
> Driver {
>         try {
>                 Call MapredJob1;
>                 Call MapredJob2;
>                 ...
>         } catch (Exception e) {
>                 throw new Exception(e);
>         }
> }
> 
> When MapredJob1 throws a ClassNotFoundException, MapredJob2 and the others still execute.
> 
> Any insight into it is appreciated.
> 
> Praveen
>