Posted to user@sqoop.apache.org by yypvsxf19870706 <yy...@gmail.com> on 2013/05/23 14:02:11 UTC

Import job failed by chance

Hi users,

    It is weird that my import job fails sometimes. When it does, I have to restart the import job manually, and then it succeeds.

    Is there any way to ensure that my import job succeeds?

    I'm using sqoop-1.4.


Regards

Sent from my iPhone

Re: Import job failed by chance

Posted by Jarek Jarcec Cecho <ja...@apache.org>.
Hi YouPeng,
It would be helpful if you could share with us the task log from the failed "nodes".
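
If the individual task logs are hard to dig out of the web UI, the following
should fetch the aggregated logs for the whole application, assuming log
aggregation is enabled on your cluster (the application id is taken from the
syslog you quoted below):

yarn logs -applicationId application_1369298403742_0153

One more observation: the [uber-SubtaskRunner] entries in that syslog suggest
the job is running in uber mode, i.e. the map task executes inside the
ApplicationMaster's own container. On a busy node that one container can sit
waiting for memory for a long time. As an experiment, and assuming the
standard Hadoop 2.x property name applies to your version, you could try
disabling uber mode when you launch the import (the trailing arguments stand
for your usual connect/table options):

sqoop import -Dmapreduce.job.ubertask.enable=false --verbose --connect ... --table ...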

Jarcec

On Tue, May 28, 2013 at 02:58:30PM +0800, YouPeng Yang wrote:
> Hi sir,
> 
>   I am sorry, I did not gather the detailed error logs.
>   It seems to have something to do with YARN, specifically with how YARN
> allocates resources on my cluster nodes. Once YARN successfully allocates
> memory for the Sqoop job, the job eventually succeeds.
>   However, I find that some nodes are busy, and a job assigned to those
> nodes fails after waiting for resource allocation.
>   The attached syslog, from the container on which the job runs, shows the
> waiting.
> 
> Maybe I'm wrong; please enlighten me.
> 
> 
> container syslog:
> 
> 2013-05-24 10:00:10,222 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher: Processing the event
> EventType: CONTAINER_REMOTE_LAUNCH for container
> container_1369298403742_0153_01_000001 taskAttempt
> attempt_1369298403742_0153_m_000000_0
> 2013-05-24 10:00:10,223 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
> [attempt_1369298403742_0153_m_000000_0] using containerId:
> [container_1369298403742_0153_01_000001 on NM: [wxossetl1:46256]
> 2013-05-24 10:00:10,223 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.LocalContainerLauncher:
> mapreduce.cluster.local.dir for uber task:
> /tmp/nm-local-dir/usercache/hadoop/appcache/application_1369298403742_0153
> 2013-05-24 10:00:10,225 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
> attempt_1369298403742_0153_m_000000_0 TaskAttempt Transitioned from
> ASSIGNED to RUNNING
> 2013-05-24 10:00:10,226 INFO [AsyncDispatcher event handler]
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
> task_1369298403742_0153_m_000000 Task Transitioned from SCHEDULED to RUNNING
> 2013-05-24 10:00:10,237 INFO [uber-SubtaskRunner]
> org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin :
> org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@34e77781
> 2013-05-24 10:00:13,224 INFO [communication thread]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl:
> Ping from attempt_1369298403742_0153_m_000000_0
> 2013-05-24 10:00:16,225 INFO [communication thread]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from
> attempt_1369298403742_0153_m_000000_0
> 2013-05-24 10:00:19,225 INFO [communication thread]
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from
> attempt_1369298403742_0153_m_000000_0
> 
> 
> 
> 2013/5/27 Jarek Jarcec Cecho <ja...@apache.org>
> 
> > Hi sir,
> > It's hard to guess what is happening in your environment. It would help
> > if you could share the logs of the failed job (the Sqoop log produced
> > with --verbose, and the map task log) so that we can investigate the issue.
> >
> > Jarcec
> >
> > On Thu, May 23, 2013 at 08:02:11PM +0800, yypvsxf19870706 wrote:
> > > Hi users,
> > >
> > >     It is weird that my import job fails sometimes. When it does, I
> > > have to restart the import job manually, and then it succeeds.
> > >
> > >     Is there any way to ensure that my import job succeeds?
> > >
> > >     I'm using sqoop-1.4.
> > >
> > > Regards
> > >
> > > Sent from my iPhone
> >

Re: Import job failed by chance

Posted by YouPeng Yang <yy...@gmail.com>.
Hi sir,

  I am sorry, I did not gather the detailed error logs.
  It seems to have something to do with YARN, specifically with how YARN
allocates resources on my cluster nodes. Once YARN successfully allocates
memory for the Sqoop job, the job eventually succeeds.
  However, I find that some nodes are busy, and a job assigned to those
nodes fails after waiting for resource allocation.
  The attached syslog, from the container on which the job runs, shows the
waiting.


Maybe I'm wrong; please enlighten me.


container syslog:

2013-05-24 10:00:10,222 INFO [uber-SubtaskRunner]
org.apache.hadoop.mapred.LocalContainerLauncher: Processing the event
EventType: CONTAINER_REMOTE_LAUNCH for container
container_1369298403742_0153_01_000001 taskAttempt
attempt_1369298403742_0153_m_000000_0
2013-05-24 10:00:10,223 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt:
[attempt_1369298403742_0153_m_000000_0] using containerId:
[container_1369298403742_0153_01_000001 on NM: [wxossetl1:46256]
2013-05-24 10:00:10,223 INFO [uber-SubtaskRunner]
org.apache.hadoop.mapred.LocalContainerLauncher:
mapreduce.cluster.local.dir for uber task:
/tmp/nm-local-dir/usercache/hadoop/appcache/application_1369298403742_0153
2013-05-24 10:00:10,225 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl:
attempt_1369298403742_0153_m_000000_0 TaskAttempt Transitioned from
ASSIGNED to RUNNING
2013-05-24 10:00:10,226 INFO [AsyncDispatcher event handler]
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl:
task_1369298403742_0153_m_000000 Task Transitioned from SCHEDULED to RUNNING
2013-05-24 10:00:10,237 INFO [uber-SubtaskRunner]
org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin :
org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@34e77781
2013-05-24 10:00:13,224 INFO [communication thread]
org.apache.hadoop.mapred.TaskAttemptListenerImpl:
Ping from attempt_1369298403742_0153_m_000000_0
2013-05-24 10:00:16,225 INFO [communication thread]
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from
attempt_1369298403742_0153_m_000000_0
2013-05-24 10:00:19,225 INFO [communication thread]
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Ping from
attempt_1369298403742_0153_m_000000_0
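
Is there a command-line way to confirm how busy those nodes are while the
attempt above sits waiting? I am assuming the standard YARN CLI would show
it, along these lines (the node id is the NM address from the syslog):

yarn node -list
yarn node -status wxossetl1:46256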



2013/5/27 Jarek Jarcec Cecho <ja...@apache.org>

> Hi sir,
> It's hard to guess what is happening in your environment. It would help if
> you could share the logs of the failed job (the Sqoop log produced with
> --verbose, and the map task log) so that we can investigate the issue.
>
> Jarcec
>
> On Thu, May 23, 2013 at 08:02:11PM +0800, yypvsxf19870706 wrote:
> > Hi users,
> >
> >     It is weird that my import job fails sometimes. When it does, I
> > have to restart the import job manually, and then it succeeds.
> >
> >     Is there any way to ensure that my import job succeeds?
> >
> >     I'm using sqoop-1.4.
> >
> > Regards
> >
> > Sent from my iPhone
>

Re: Import job failed by chance

Posted by Jarek Jarcec Cecho <ja...@apache.org>.
Hi sir,
It's hard to guess what is happening in your environment. It would help if you could share the logs of the failed job (the Sqoop log produced with --verbose, and the map task log) so that we can investigate the issue.
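
For reference, a minimal sketch of the invocation I have in mind; the connect
string, table, and target directory below are placeholders to adapt to your
own job:

sqoop import --verbose \
  --connect jdbc:mysql://dbhost/mydb \
  --table MY_TABLE \
  --target-dir /user/hadoop/my_table

With --verbose, Sqoop logs at debug level, including the generated SQL and
the MapReduce job submission details, which usually narrows down where the
failure happens.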

Jarcec

On Thu, May 23, 2013 at 08:02:11PM +0800, yypvsxf19870706 wrote:
> Hi users,
>
>     It is weird that my import job fails sometimes. When it does, I have to restart the import job manually, and then it succeeds.
>
>     Is there any way to ensure that my import job succeeds?
>
>     I'm using sqoop-1.4.
>
> Regards
>
> Sent from my iPhone