Posted to common-user@hadoop.apache.org by madhu phatak <ph...@gmail.com> on 2012/04/02 08:38:28 UTC

Re: 0 tasktrackers in jobtracker but all datanodes present

Hi,
1. Stop the jobtracker and tasktrackers: bin/stop-mapred.sh

2. Force the namenode out of safe mode: bin/hadoop dfsadmin -safemode leave

3. Start the jobtracker and tasktrackers again: bin/start-mapred.sh
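
If it helps, here is the same sequence as a small shell script. This is
just a sketch: it assumes it is run from HADOOP_HOME on the jobtracker
node, and that forcing safe mode off is acceptable for your cluster.

  #!/bin/sh
  # Restart MapReduce after forcing the namenode out of safe mode.
  bin/stop-mapred.sh                    # stop the jobtracker and tasktrackers
  bin/hadoop dfsadmin -safemode leave   # force the namenode out of safe mode
  bin/start-mapred.sh                   # start the jobtracker and tasktrackers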

On Fri, Jan 13, 2012 at 5:20 AM, Ravi Prakash <ra...@gmail.com> wrote:

> Courtesy Kihwal and Bobby
>
> Have you tried increasing the max heap size with -Xmx? Also make sure that
> you have swap enabled.
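>
> For example, the daemon heap can be raised in conf/hadoop-env.sh, which
> passes -Xmx to every Hadoop daemon JVM. The 4000 MB value below is only
> an illustration; size it to what your namenode actually needs:
>
>   # conf/hadoop-env.sh -- maximum heap, in MB, for all Hadoop daemons
>   export HADOOP_HEAPSIZE=4000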
>
> On Wed, Jan 11, 2012 at 6:59 PM, Gaurav Bagga <gb...@gmail.com> wrote:
>
> > Hi
> >
> > hadoop-0.19
> > I have a working hadoop cluster which has been running perfectly for
> > months.
> > But today, after restarting the cluster, the jobtracker UI has been
> > showing state INITIALIZING for a long time, and it stays in that state.
> > The node count in the jobtracker is zero, whereas all the nodes are
> > present on the dfs.
> > It says Safe mode is on.
> > I grep'ed on the slaves and I can see the tasktrackers running.
> >
> > In the namenode logs I get the following error:
> >
> >
> > 2012-01-11 16:50:57,195 WARN  ipc.Server - Out of Memory in server select
> > java.lang.OutOfMemoryError: Java heap space
> >        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
> >        at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
> >        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:804)
> >        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:400)
> >        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:309)
> >
> > Not sure why the cluster is not coming up
> > -G
> >
>



-- 
https://github.com/zinnia-phatak-dev/Nectar

Re: 0 tasktrackers in jobtracker but all datanodes present

Posted by Bejoy Ks <be...@gmail.com>.
Gaurav
       NN memory might have hit its upper bound. As a benchmark, for every
1 million files/blocks/directories about 1 GB of memory is required on the
NN. The number of files in your cluster might have grown beyond this
threshold. So the options left for you would be:
- If there are a large number of small files, use HAR or SequenceFiles to
group them (see the sketch below)
- Increase the NN heap
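
For the first option, a HAR archive can be packed with the hadoop archive
tool. A sketch (the paths are made up for illustration, and the exact
syntax varies slightly across Hadoop versions):

  bin/hadoop archive -archiveName logs.har /user/gaurav/logs /user/gaurav/archived

For the second option, apply the rule of thumb above: a namenode tracking
5 million files/blocks/directories would need roughly 5 GB of heap.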

Regards
Bejoy KS

On Mon, Apr 2, 2012 at 12:08 PM, madhu phatak <ph...@gmail.com> wrote:

> Hi,
> 1. Stop the jobtracker and tasktrackers: bin/stop-mapred.sh
>
> 2. Force the namenode out of safe mode: bin/hadoop dfsadmin -safemode leave
>
> 3. Start the jobtracker and tasktrackers again: bin/start-mapred.sh