Posted to common-user@hadoop.apache.org by Bradford Stephens <br...@gmail.com> on 2008/03/28 21:23:37 UTC

Re: hadoop 0.15.3 r612257 freezes on reduce task

Hey everyone,

I'm having a similar problem:

"Map output lost, rescheduling:
getMapOutput(task_200803281212_0001_m_000000_2,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
task_200803281212_0001_m_000000_2/file.out.index in any of the
configured local directories"

Then it fails in about 10 minutes. I'm just trying to grep some etexts.

New HDFS installation on 2 nodes (one master, one slave). Ubuntu
Linux, Dell Core 2 Duo processors, Java 1.5.0.

I have a feeling it's a configuration issue. Anyone else run into it?


On Tue, Jan 29, 2008 at 11:08 AM, Jason Venner <ja...@attributor.com> wrote:
> We are running under linux with dfs on GiGE lans,  kernel
>  2.6.15-1.2054_FC5smp, with a variety of xeon steppings for our processors.
>  Our replication factor was set to 3
>
>
>
>  Florian Leibert wrote:
>  > Maybe it helps to know that we're running Hadoop inside amazon's EC2...
>  >
>  > Thanks,
>  > Florian
>  >
>
>  --
>  Jason Venner
>  Attributor - Publish with Confidence <http://www.attributor.com/>
>  Attributor is hiring Hadoop Wranglers, contact if interested
>

Re: hadoop 0.15.3 r612257 freezes on reduce task

Posted by Bradford Stephens <br...@gmail.com>.
Also, I'm running hadoop 0.16.1 :)

On Fri, Mar 28, 2008 at 1:23 PM, Bradford Stephens
<br...@gmail.com> wrote:
> Hey everyone,
>
>  I'm having a similar problem:
>
>  "Map output lost, rescheduling:
>  getMapOutput(task_200803281212_0001_m_000000_2,0) failed :
>
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
>  task_200803281212_0001_m_000000_2/file.out.index in any of the
>  configured local directories"
>
>  Then it fails in about 10 minutes. I'm just trying to grep some etexts.
>
>  New HDFS installation on 2 nodes (one master, one slave). Ubuntu
>  Linux, Dell Core 2 Duo processors, Java 1.5.0.
>
>  I have a feeling it's a configuration issue. Anyone else run into it?
>
>
>
>
>  On Tue, Jan 29, 2008 at 11:08 AM, Jason Venner <ja...@attributor.com> wrote:
>  > We are running under linux with dfs on GiGE lans,  kernel
>  >  2.6.15-1.2054_FC5smp, with a variety of xeon steppings for our processors.
>  >  Our replication factor was set to 3
>  >
>  >
>  >
>  >  Florian Leibert wrote:
>  >  > Maybe it helps to know that we're running Hadoop inside amazon's EC2...
>  >  >
>  >  > Thanks,
>  >  > Florian
>  >  >
>  >
>  >  --
>  >  Jason Venner
>  >  Attributor - Publish with Confidence <http://www.attributor.com/>
>  >  Attributor is hiring Hadoop Wranglers, contact if interested
>  >
>

Re: hadoop 0.15.3 r612257 freezes on reduce task

Posted by Bradford Stephens <br...@gmail.com>.
Thanks for the hint, Devaraj! I was using paths for the
mapred.local.dir that were based on ~/, so I gave it an absolute path
instead. Also, the directory for hadoop.tmp.dir did not exist on one
machine :)
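
For anyone hitting the same thing, a minimal hadoop-site.xml sketch of the two properties involved (the paths below are examples only; the point is that they must be absolute, not ~-relative, and must already exist on every node):

```xml
<configuration>
  <!-- Local scratch space for map output (file.out.index lives here).
       Use absolute paths; comma-separate multiple disks if available. -->
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/mapred/local</value>
  </property>
  <!-- Base temp directory; create this directory on every node beforehand. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
```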


On Fri, Mar 28, 2008 at 2:00 PM, Devaraj Das <dd...@yahoo-inc.com> wrote:
> Hi Bradford,
>  Could you please check what your mapred.local.dir is set to?
>  Devaraj.
>
>
>
>  > -----Original Message-----
>  > From: Bradford Stephens [mailto:bradfordstephens@gmail.com]
>  > Sent: Saturday, March 29, 2008 1:54 AM
>  > To: core-user@hadoop.apache.org
>  > Cc: bstephens@visibletechnologies.com
>  > Subject: Re: hadoop 0.15.3 r612257 freezes on reduce task
>  >
>  > Hey everyone,
>  >
>  > I'm having a similar problem:
>  >
>  > "Map output lost, rescheduling:
>  > getMapOutput(task_200803281212_0001_m_000000_2,0) failed :
>  > org.apache.hadoop.util.DiskChecker$DiskErrorException: Could
>  > not find task_200803281212_0001_m_000000_2/file.out.index in
>  > any of the configured local directories"
>  >
>  > Then it fails in about 10 minutes. I'm just trying to grep
>  > some etexts.
>  >
>  > New HDFS installation on 2 nodes (one master, one slave).
>  > Ubuntu Linux, Dell Core 2 Duo processors, Java 1.5.0.
>  >
>  > I have a feeling it's a configuration issue. Anyone else run into it?
>  >
>  >
>  > On Tue, Jan 29, 2008 at 11:08 AM, Jason Venner
>  > <ja...@attributor.com> wrote:
>  > > We are running under linux with dfs on GiGE lans,  kernel
>  > > 2.6.15-1.2054_FC5smp, with a variety of xeon steppings for
>  > our processors.
>  > >  Our replication factor was set to 3
>  > >
>  > >
>  > >
>  > >  Florian Leibert wrote:
>  > >  > Maybe it helps to know that we're running Hadoop inside
>  > amazon's EC2...
>  > >  >
>  > >  > Thanks,
>  > >  > Florian
>  > >  >
>  > >
>  > >  --
>  > >  Jason Venner
>  > >  Attributor - Publish with Confidence <http://www.attributor.com/>
>  > > Attributor is hiring Hadoop Wranglers, contact if interested
>  > >
>  >
>
>

RE: hadoop 0.15.3 r612257 freezes on reduce task

Posted by Devaraj Das <dd...@yahoo-inc.com>.
Hi Bradford,
Could you please check what your mapred.local.dir is set to?
Devaraj. 

> -----Original Message-----
> From: Bradford Stephens [mailto:bradfordstephens@gmail.com] 
> Sent: Saturday, March 29, 2008 1:54 AM
> To: core-user@hadoop.apache.org
> Cc: bstephens@visibletechnologies.com
> Subject: Re: hadoop 0.15.3 r612257 freezes on reduce task
> 
> Hey everyone,
> 
> I'm having a similar problem:
> 
> "Map output lost, rescheduling:
> getMapOutput(task_200803281212_0001_m_000000_2,0) failed :
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could 
> not find task_200803281212_0001_m_000000_2/file.out.index in 
> any of the configured local directories"
> 
> Then it fails in about 10 minutes. I'm just trying to grep 
> some etexts.
> 
> New HDFS installation on 2 nodes (one master, one slave). 
> Ubuntu Linux, Dell Core 2 Duo processors, Java 1.5.0.
> 
> I have a feeling it's a configuration issue. Anyone else run into it?
> 
> 
> On Tue, Jan 29, 2008 at 11:08 AM, Jason Venner 
> <ja...@attributor.com> wrote:
> > We are running under linux with dfs on GiGE lans,  kernel  
> > 2.6.15-1.2054_FC5smp, with a variety of xeon steppings for 
> our processors.
> >  Our replication factor was set to 3
> >
> >
> >
> >  Florian Leibert wrote:
> >  > Maybe it helps to know that we're running Hadoop inside 
> amazon's EC2...
> >  >
> >  > Thanks,
> >  > Florian
> >  >
> >
> >  --
> >  Jason Venner
> >  Attributor - Publish with Confidence <http://www.attributor.com/>  
> > Attributor is hiring Hadoop Wranglers, contact if interested
> >
>