Posted to common-user@hadoop.apache.org by Manoj Babu <ma...@gmail.com> on 2013/02/01 16:09:19 UTC

Reg Too many fetch-failures Error

Hi All,

I am getting a "Too many fetch-failures" exception.
What might be the reason for this exception? For the same size of data I didn't
face this error earlier, and there is no change in the code.
How can I avoid it?

Thanks in advance.

Cheers!
Manoj.

Re: Reg Too many fetch-failures Error

Posted by Manoj Babu <ma...@gmail.com>.
Hi Vijay,

Thanks for the information.
A few jobs were running in the cluster at the time.

Cheers!
Manoj.


On Fri, Feb 1, 2013 at 11:22 PM, Vijay Thakorlal <vi...@hotmail.com> wrote:

> Hi Manoj,
>
> As you may be aware, this means the reducers are unable to fetch
> intermediate data from the TaskTrackers that ran the map tasks. You can try:
>
> * increasing tasktracker.http.threads, so more threads are available to
> handle fetch requests from reducers;
> * decreasing mapreduce.reduce.parallel.copies, so fewer copies/fetches
> are performed in parallel.
>
> It could also be due to a temporary DNS issue.
>
> See slide 26 of this presentation for potential causes of this message:
> http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
>
> Not sure why you did not see the problem before, but was it the same
> data or different data? Did you have other jobs running on your cluster?
>
> Hope that helps
>
> Regards
> Vijay
>
> *From:* Manoj Babu [mailto:manoj444@gmail.com]
> *Sent:* 01 February 2013 15:09
> *To:* user@hadoop.apache.org
> *Subject:* Reg Too many fetch-failures Error
>
> Hi All,
>
> I am getting a "Too many fetch-failures" exception.
>
> What might be the reason for this exception? For the same size of data I
> didn't face this error earlier, and there is no change in the code.
>
> How can I avoid it?
>
> Thanks in advance.
>
> Cheers!
>
> Manoj.
>




RE: Reg Too many fetch-failures Error

Posted by Vijay Thakorlal <vi...@hotmail.com>.
Hi Manoj,

 

As you may be aware, this means the reducers are unable to fetch intermediate
data from the TaskTrackers that ran the map tasks. You can try:

* increasing tasktracker.http.threads, so more threads are available to handle
fetch requests from reducers;

* decreasing mapreduce.reduce.parallel.copies, so fewer copies/fetches are
performed in parallel.
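
The two settings above can be set cluster-wide in mapred-site.xml. A minimal
sketch, assuming the property names as given here (exact key names and defaults
vary across Hadoop versions; the values are illustrative, not tuned
recommendations):

```xml
<!-- mapred-site.xml fragment: illustrative values only; tune for your cluster -->
<configuration>
  <property>
    <!-- HTTP worker threads on each TaskTracker that serve map output
         to reducers; raise this if fetch requests are being turned away -->
    <name>tasktracker.http.threads</name>
    <value>80</value>
  </property>
  <property>
    <!-- number of parallel fetch threads per reducer; lower this if the
         shuffle is overwhelming the TaskTrackers (in older MRv1 configs
         the key may be mapred.reduce.parallel.copies) -->
    <name>mapreduce.reduce.parallel.copies</name>
    <value>5</value>
  </property>
</configuration>
```

Note that tasktracker.http.threads takes effect on the TaskTracker nodes, so
changing it requires a TaskTracker restart, whereas the parallel-copies setting
can be overridden per job.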

 

It could also be due to a temporary DNS issue.

 

See slide 26 of this presentation for potential causes of this message:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera

 

Not sure why you did not see the problem before, but was it the same data
or different data? Did you have other jobs running on your cluster?

 

Hope that helps

 

Regards

Vijay

 

From: Manoj Babu [mailto:manoj444@gmail.com] 
Sent: 01 February 2013 15:09
To: user@hadoop.apache.org
Subject: Reg Too many fetch-failures Error

 

Hi All,

 

I am getting a "Too many fetch-failures" exception.

What might be the reason for this exception? For the same size of data I didn't
face this error earlier, and there is no change in the code.

How can I avoid it?

 

Thanks in advance.

 

Cheers!

Manoj.



