Posted to common-user@hadoop.apache.org by Saptarshi Guha <sa...@gmail.com> on 2009/08/16 08:19:40 UTC

Jobtracker still finding old task nodes

Hello,
After formatting the HDFS and removing several entries from the slaves
file, when I start up
$HADOOP/bin/start-all.sh (hadoop 0.20)
I get this in jobtracker.log
All the machines except acrux have been removed from the slaves file.
Both acrux and the jobtracker have the same conf files.

Why does it discover the old machines? Does it automatically discover
new machines?

Thanks and Regards
Saptarshi

WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot find
record of 'previous' heartbeat for 'tracker_deneb.'; reinitializing
the tasktracker
WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot find
record of 'previous' heartbeat for
'tracker_adhara.stat.purdue.edu:localhost.localdomain/127.0.0.1:37715';
reinitializing the tasktracker
WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot find
record of 'previous' heartbeat for 'tracker_castor'; reinitializing
the
tasktracker
2009-08-16 02:15:03,146 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/deneb.
2009-08-16 02:15:03,161 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/adhara
2009-08-16 02:15:03,165 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/castor
2009-08-16 02:15:03,193 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/acrux.
2009-08-16 02:15:15,158 ERROR org.apache.hadoop.mapred.PoolManager:
Failed to reload allocations file - will use existing allocation

Re: Jobtracker still finding old task nodes

Posted by Saptarshi Guha <sa...@gmail.com>.
My mistake, I had assumed the TaskTrackers were not running on those
machines. But they were ...
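For anyone hitting the same symptom: conf/slaves is only consulted by the
start/stop helper scripts, so a TaskTracker that is already running keeps
heartbeating to the JobTracker no matter what the file says. A minimal
cleanup sketch, assuming $HADOOP_HOME points at the 0.20 install on each
host (hostnames taken from the log above; shown as a dry run with echo --
drop the echo to actually execute):

```shell
# Hosts removed from conf/slaves that may still be running daemons
# (names taken from the jobtracker.log warnings in this thread).
STALE_HOSTS="deneb adhara castor"

for host in $STALE_HOSTS; do
  # hadoop-daemon.sh ships with Hadoop 0.20; running it over ssh on each
  # stale host stops the leftover daemons so they stop re-registering.
  echo ssh "$host" "\$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker"
  echo ssh "$host" "\$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode"
done
```

After the stray daemons are down, a fresh start-all.sh will only bring up
the hosts still listed in conf/slaves.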

On Sun, Aug 16, 2009 at 2:19 AM, Saptarshi Guha <sa...@gmail.com>wrote:
