Posted to user@hadoop.apache.org by Naresh Dulam <na...@gmail.com> on 2018/01/18 02:18:03 UTC

map task attempt failed after 300

I have a sqoop job which pulls data from a huge database table with a
filter applied. I can't create a temporary table with the filtered data and
use that in the sqoop statement; I have to query the huge table directly.
The mapreduce job started as part of the sqoop job is failing after 4 map
task attempts, each of which timed out after 300 secs. Each map task
attempt waits 5 mins to receive a row from the table, but in my case the
query takes more than 5 mins to return the first row.

Is there a way to increase the task attempt timeout to more than 5 mins?

I set the property mapreduce.task.timeout to 60, but my understanding is
that this value only controls whether a running map task that has been idle
for 60 mins gets killed.
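(For reference: mapreduce.task.timeout is specified in milliseconds, with a
default of 600000, i.e. 10 minutes, so a value of 60 would actually mean 60
milliseconds. A quick conversion for a one-hour timeout:)

```python
# mapreduce.task.timeout is measured in milliseconds, so convert
# the desired timeout in minutes to the millisecond value Hadoop expects.
def minutes_to_timeout_ms(minutes: int) -> int:
    return minutes * 60 * 1000

print(minutes_to_timeout_ms(60))  # 3600000 ms = 1 hour
print(minutes_to_timeout_ms(10))  # 600000 ms = the Hadoop default
```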



attempt_1514985864009_623381_m_000000_0 FAILED /container-1-rack-11/host:8042 logs
  Wed Jan 17 17:43:11 -0600 2018 to Wed Jan 17 17:48:38 -0600 2018 (5mins, 26sec)
  Timed out after 300 secs
attempt_1514985864009_623381_m_000000_1 FAILED /container-1-rack-11/host:8042 logs
  Wed Jan 17 17:48:40 -0600 2018 to Wed Jan 17 17:54:08 -0600 2018 (5mins, 28sec)
  Timed out after 300 secs
attempt_1514985864009_623381_m_000000_2 FAILED /container-1-rack-12/host:8042 logs
  Wed Jan 17 17:54:10 -0600 2018 to Wed Jan 17 17:59:38 -0600 2018 (5mins, 28sec)
  Timed out after 300 secs
attempt_1514985864009_623381_m_000000_3 FAILED /container-1-rack-8/host:8042 logs
  Wed Jan 17 17:59:39 -0600 2018 to Wed Jan 17 18:05:08 -0600 2018 (5mins, 28sec)
  Timed out after 300 secs



Thank you,
Naresh

Re: map task attempt failed after 300

Posted by Naresh Dulam <na...@gmail.com>.
These settings resolved the problem for me:

  -Dmapreduce.task.timeout=3600000
  -Dmapreduce.jobtracker.expire.trackers.interval=3600000
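In case it helps anyone else: these are generic Hadoop options, so they must
come immediately after the sqoop tool name, before any tool-specific
arguments. A sketch of the invocation (the JDBC URL, username, table, and
filter below are placeholders, not the details of my actual job):

```shell
# Sketch only: connection string, table, and --where filter are placeholders.
# -D generic options must appear directly after "import", before tool arguments.
sqoop import \
  -Dmapreduce.task.timeout=3600000 \
  -Dmapreduce.jobtracker.expire.trackers.interval=3600000 \
  --connect "jdbc:oracle:thin:@//dbhost:1521/ORCL" \
  --username myuser -P \
  --table BIG_TABLE \
  --where "load_date >= '2018-01-01'" \
  --target-dir /data/big_table
```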

