Posted to user@sqoop.apache.org by Shibu Thomas <sh...@microsoft.com> on 2012/03/30 15:31:27 UTC

Sqoop Action error in Oozie

Hi All,



We are executing Sqoop actions in parallel using a fork in Oozie.

In each Sqoop action we execute the command below.



import --driver com.mysql.jdbc.Driver --connect 'jdbc:mysql://127.0.0.1/bedrock' --username root --password Pass1234 --table employee --target-dir /user/cloudera/employee --split-by 'Id' -m 1
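
For reference, the workflow is roughly of the shape sketched below. The
action names, the second table, and the ${jobTracker}/${nameNode}
parameters are illustrative placeholders rather than our exact values.

<workflow-app name="sqoop-fork-wf" xmlns="uri:oozie:workflow:0.2">
    <start to="fork-imports"/>
    <fork name="fork-imports">
        <path start="import-employee"/>
        <path start="import-department"/>
    </fork>
    <action name="import-employee">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>import --driver com.mysql.jdbc.Driver --connect 'jdbc:mysql://127.0.0.1/bedrock' --username root --password Pass1234 --table employee --target-dir /user/cloudera/employee --split-by 'Id' -m 1</command>
        </sqoop>
        <ok to="join-imports"/>
        <error to="fail"/>
    </action>
    <!-- import-department omitted for brevity; same structure with a
         different --table and --target-dir -->
    <join name="join-imports" to="end"/>
    <kill name="fail">
        <message>Sqoop import failed</message>
    </kill>
    <end name="end"/>
</workflow-app>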



We have just 3 records in the mysql table.



This workflow runs for a couple of hours, and even though the import itself is successful we get the error below in Oozie.

Oozie also reports that the jobs are killed/errored.



2012-03-30 07:18:17,513 INFO org.apache.hadoop.mapred.JobClient: map 0% reduce 0%

2012-03-30 08:55:04,288 WARN org.apache.hadoop.mapred.Task: Parent died. Exiting attempt_201203210951_0155_m_000000_0

2012-03-30 08:55:04,292 INFO org.apache.hadoop.mapred.Task: Communication exception: java.lang.SecurityException: Intercepted System.exit(66)



Thanks



Shibu Thomas

MSCIS-IS

Office :  +91 (40) 669 32660

Mobile: +91 95811 51116

Re: Sqoop Action error in Oozie

Posted by Kathleen Ting <ka...@cloudera.com>.
Shibu - are you using the FairScheduler? If so, since you mention that
the Sqoop import command itself is successful, you could be hitting
your per-user running-job limit.

Whenever Oozie launches a job, it requires two job submissions (if not
more): one for the monitor+launcher job, and the subsequent ones for
the jobs that do the real work. The launcher job submits the remaining
jobs and therefore sticks around until they have all ended, taking up
one running-job slot for the whole lifetime of the Oozie job.

For example, with a per-user job limit of 3, if you were to run 3
Oozie jobs, the 3 slots would be filled with launchers first. These
would then submit their real jobs, which would end up queued - thereby
forming a resource deadlock.

The solution is to channel the Oozie launcher Hadoop jobs into a
dedicated launcher pool. This pool can have a running-job limit too,
but it won't cause a deadlock because the launchers and the real jobs
are now in separate pools.

To do this, pass the configuration property
"oozie.launcher.<property that selects your pool>" via the workflow
action's <configuration> element or a <job-xml> file, pointing it at
the separate pool.
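
For instance, with the MR1 FairScheduler the pool is typically chosen
via mapred.fairscheduler.pool (depending on how
mapred.fairscheduler.poolnameproperty is set up on your cluster), so
the action configuration would look something like the sketch below.
The pool name "launchers" is just an example - use whatever pool you
define in your scheduler allocations.

<action name="sqoop-import">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- Route only the launcher job to a dedicated pool; the
                 child Sqoop MapReduce job is not affected by the
                 oozie.launcher.* prefix -->
            <property>
                <name>oozie.launcher.mapred.fairscheduler.pool</name>
                <value>launchers</value>
            </property>
        </configuration>
        <!-- <command> with the same Sqoop arguments as before -->
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>

On the JobTracker side the launcher pool itself would be defined in the
FairScheduler allocation file - again, the name and limit here are only
an example:

<?xml version="1.0"?>
<allocations>
    <!-- Dedicated pool for Oozie launcher jobs, with its own running
         job limit so launchers never compete with the real jobs -->
    <pool name="launchers">
        <maxRunningJobs>5</maxRunningJobs>
    </pool>
</allocations>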

Regards, Kathleen


On Fri, Mar 30, 2012 at 7:10 AM, Jarek Jarcec Cecho <ja...@apache.org> wrote:
>
> Hi,
> can you share your workflow.xml file with the complete log of the sqoop execution (it can be retrieved from the "launcher" job)? Please add the parameter "--verbose" to get a more detailed log.
>
> Also can you share your sqoop and oozie version?
>
> Jarcec
>
> On Mar 30, 2012, at 3:31 PM, Shibu Thomas wrote:
>
> > Hi All,
> >
> > We are executing Sqoop action in parallel using fork in Oozie.
> > In the Sqoop actions we are executing  the below command.
> >
> > import --driver com.mysql.jdbc.Driver --connect 'jdbc:mysql://127.0.0.1/bedrock' --username root --password Pass1234 --table employee --target-dir /user/cloudera/employee --split-by 'Id' -m 1
> >
> > We have just 3 records in the mysql table.
> >
> > This workflow runs for a couple of hours, and even though the import itself is successful we get the error below in Oozie.
> > Oozie also reports that the jobs are killed/errored.
> >
> > 2012-03-30 07:18:17,513 INFO org.apache.hadoop.mapred.JobClient: map 0% reduce 0%
> > 2012-03-30 08:55:04,288 WARN org.apache.hadoop.mapred.Task: Parent died. Exiting attempt_201203210951_0155_m_000000_0
> > 2012-03-30 08:55:04,292 INFO org.apache.hadoop.mapred.Task: Communication exception: java.lang.SecurityException: Intercepted System.exit(66)
> >
> > Thanks
> >
> > Shibu Thomas
> > MSCIS-IS
> > Office :  +91 (40) 669 32660
> > Mobile: +91 95811 51116
>

Re: Sqoop Action error in Oozie

Posted by Jarek Jarcec Cecho <ja...@apache.org>.
Hi,
can you share your workflow.xml file with the complete log of the sqoop execution (it can be retrieved from the "launcher" job)? Please add the parameter "--verbose" to get a more detailed log.
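
For example, taking the command from your mail, that would be something
like:

import --driver com.mysql.jdbc.Driver --connect 'jdbc:mysql://127.0.0.1/bedrock' --username root --password Pass1234 --table employee --target-dir /user/cloudera/employee --split-by 'Id' -m 1 --verbose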

Also can you share your sqoop and oozie version?

Jarcec

On Mar 30, 2012, at 3:31 PM, Shibu Thomas wrote:

> Hi All,
>  
> We are executing Sqoop action in parallel using fork in Oozie.
> In the Sqoop actions we are executing  the below command.
>  
> import --driver com.mysql.jdbc.Driver --connect 'jdbc:mysql://127.0.0.1/bedrock' --username root --password Pass1234 --table employee --target-dir /user/cloudera/employee --split-by 'Id' -m 1
>  
> We have just 3 records in the mysql table.
>  
> This workflow runs for a couple of hours, and even though the import itself is successful we get the error below in Oozie.
> Oozie also reports that the jobs are killed/errored.
>  
> 2012-03-30 07:18:17,513 INFO org.apache.hadoop.mapred.JobClient: map 0% reduce 0%
> 2012-03-30 08:55:04,288 WARN org.apache.hadoop.mapred.Task: Parent died. Exiting attempt_201203210951_0155_m_000000_0
> 2012-03-30 08:55:04,292 INFO org.apache.hadoop.mapred.Task: Communication exception: java.lang.SecurityException: Intercepted System.exit(66)
>  
> Thanks
>  
> Shibu Thomas
> MSCIS-IS
> Office :  +91 (40) 669 32660
> Mobile: +91 95811 51116