Posted to common-user@hadoop.apache.org by Charles AI <ha...@gmail.com> on 2012/08/27 09:03:32 UTC

Why cannot I start namenode or localhost:50070 ?

Hi All,
I was running a cluster of one master and 4 slaves. I copied the
hadoop_install folder from the master to all 4 slaves and configured
them. However, when I run start-all.sh from the master machine, it
shows the following:

starting namenode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
slave2: ssh: connect to host slave2 port 22: Connection refused
master: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
slave4: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
slave3: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
slave1: starting datanode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
master: starting secondarynamenode, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
starting jobtracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
slave2: ssh: connect to host slave2 port 22: Connection refused
slave4: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
master: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
slave3: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
slave1: starting tasktracker, logging to
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out

I know that slave2 is not on, but that should not be the problem. After
this, I typed 'jps' in the master's shell, and it shows:
6907 Jps
6306 DataNode
6838 TaskTracker
6612 JobTracker
6533 SecondaryNameNode

And when I opened "localhost:50030", the page said:
master Hadoop Map/Reduce Administration
Quick Links <http://master:50030/jobtracker.jsp#quicklinks>
*State:* INITIALIZING
*Started:* Mon Aug 27 14:54:46 CST 2012
*Version:* 0.20.2, r911707
*Compiled:* Fri Feb 19 08:07:34 UTC 2010 by chrisdo
*Identifier:* 201208271454

I don't quite get what "State: INITIALIZING" means. Additionally, I
cannot open "localhost:50070".

So, any suggestions?

Thanks in advance.
CH
-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by TianYi Zhu <ti...@facilitatedigital.com>.
Hi Charles,

Map/Reduce (jobtracker/tasktrackers, localhost:50030) runs on top of
HDFS (namenode/datanodes, localhost:50070) or the local file system.
It seems something is wrong with HDFS, so Map/Reduce is blocked and
shows INITIALIZING. Please check the namenode log
(/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out).
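
As a quick check (hostname and paths taken from your start-all.sh
output; note that the .log file, rather than the .out file, usually
contains the full log4j output and any stack trace):

  jps    # a NameNode process should be listed if it came up
  tail -n 100 /usr/local/hadoop/hadoop/logs/hadoop-hadoop-namenode-west-desktop.log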


On Mon, Aug 27, 2012 at 5:03 PM, Charles AI <ha...@gmail.com> wrote:

> Hi All,
> I was running a cluster of one master and 4 slaves. I copied the
> hadoop_install folder from the master to all 4 slaves, and configured them
> well.
> How ever when i sh start-all.sh from the master machine. It shows below:
>
> starting namenode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> slave2: ssh: connect to host slave2 port 22: Connection refused
> master: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> slave4: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> slave3: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> slave1: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> master: starting secondarynamenode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> starting jobtracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> slave2: ssh: connect to host slave2 port 22: Connection refused
> slave4: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> master: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> slave3: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> slave1: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
>
> I know that slave2 is not on. But that should not be the problem. After
> this , I typed 'jps' in the master's shell, and it shows:
> 6907 Jps
> 6306 DataNode
> 6838 TaskTracker
> 6612 JobTracker
> 6533 SecondaryNameNode
>
> And when I opened this link "localhost:50030",the page said :
> master Hadoop Map/Reduce Administration
>  Quick Links <http://master:50030/jobtracker.jsp#quicklinks>
> *State:* INITIALIZING
> *Started:* Mon Aug 27 14:54:46 CST 2012
> *Version:* 0.20.2, r911707
> *Compiled:* Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> *Identifier:* 201208271454
>
> I don't quite get what the "State : INITIALIZING" means. Additionally, i
> cannot open "localhost:50070".
>
> So, Any suggestions ?
>
> Thanks in advance.
> CH
> --
> in a hadoop learning cycle
>

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
Yeah, thank you. Both the NN log and the DN log on the master machine
are empty files with a size of 0.
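
(For reference, something like this would show the zero-size files,
assuming the log directory from the start-up output:

  ls -l /usr/local/hadoop/hadoop/logs/hadoop-hadoop-*-west-desktop.*
)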

On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:

> Charles,
>
> Can you check your NN logs to see if it is properly up?
>
> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> > Hi All,
> > I was running a cluster of one master and 4 slaves. I copied the
> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> > well.
> > How ever when i sh start-all.sh from the master machine. It shows below:
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > master: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> > slave4: starting datanode, logging to
> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> > slave3: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> > slave1: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> > master: starting secondarynamenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > slave4: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> > master: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> > slave3: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> > slave1: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >
> > I know that slave2 is not on. But that should not be the problem. After
> this
> > , I typed 'jps' in the master's shell, and it shows:
> > 6907 Jps
> > 6306 DataNode
> > 6838 TaskTracker
> > 6612 JobTracker
> > 6533 SecondaryNameNode
> >
> > And when I opened this link "localhost:50030",the page said :
> > master Hadoop Map/Reduce Administration
> > Quick Links
> > State: INITIALIZING
> > Started: Mon Aug 27 14:54:46 CST 2012
> > Version: 0.20.2, r911707
> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> > Identifier: 201208271454
> >
> > I don't quite get what the "State : INITIALIZING" means. Additionally, i
> > cannot open "localhost:50070".
> >
> > So, Any suggestions ?
> >
> > Thanks in advance.
> > CH
> > --
> > in a hadoop learning cycle
>
>
>
> --
> Harsh J
>



-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
Hi Mohammad,
Thank you for the reminder.
I have checked the two directories and set them to /home/hadoopfs/data
and /home/hadoopfs/name, not under /tmp.
My problem has now been solved. Thank you.
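
(For anyone hitting the same thing, a quick way to confirm the namenode
is really up after a restart, for example:

  jps                       # NameNode should now appear on the master
  hadoop dfsadmin -report   # live datanodes should be listed here

and http://localhost:50070 should load.)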

On Mon, Aug 27, 2012 at 4:31 PM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Charles,
>
>    Have you added dfs.name. dir and dfs.data. dir props in your
> hdfs-site.xml file??Values of these props default to the /tmp dir, so at
> each restart both data and meta info is lost.
>
>
> On Monday, August 27, 2012, Charles AI <ha...@gmail.com> wrote:
> > thank you guys.
> > the logs say my dfs.name.dir is not consistent:
> > Directory /home/hadoop/hadoopfs/name is in an inconsistent state:
> storage directory does not exist or is not accessible.
> > And the namenode starts after "hadoop namenode format".
> >
> >
> > On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:
> >>
> >> Charles,
> >>
> >> Can you check your NN logs to see if it is properly up?
> >>
> >> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com>
> wrote:
> >> > Hi All,
> >> > I was running a cluster of one master and 4 slaves. I copied the
> >> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> >> > well.
> >> > How ever when i sh start-all.sh from the master machine. It shows
> below:
> >> >
> >> > starting namenode, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> >> > slave2: ssh: connect to host slave2 port 22: Connection refused
> >> > master: starting datanode, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> >> > slave4: starting datanode, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> >> > slave3: starting datanode, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> >> > slave1: starting datanode, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> >> > master: starting secondarynamenode, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> >> > starting jobtracker, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> >> > slave2: ssh: connect to host slave2 port 22: Connection refused
> >> > slave4: starting tasktracker, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> >> > master: starting tasktracker, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> >> > slave3: starting tasktracker, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> >> > slave1: starting tasktracker, logging to
> >> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >> >
> >> > I know that slave2 is not on. But that should not be the problem.
> After this
> >> > , I typed 'jps' in the master's shell, and it shows:
> >> > 6907 Jps
> >> > 6306 DataNode
> >> > 6838 TaskTracker
> >> > 6612 JobTracker
> >> > 6533 SecondaryNameNode
> >> >
> >> > And when I opened this link "localhost:50030",the page said :
> >> > master Hadoop Map/Reduce Administration
> >> > Quick Links
> >> > State: INITIALIZING
> >> > Started: Mon Aug 27 14:54:46 CST 2012
> >> > Version: 0.20.2, r911707
> >> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> >> > Identifier: 201208271454
> >> >
> >> > I don't quite get what the "State : INITIALIZING" means.
> Additionally, i
> >> > cannot open "localhost:50070".
> >> >
> >> > So, Any suggestions ?
> >> >
> >> > Thanks in advance.
> >> > CH
> >> > --
> >> > in a hadoop learning cycle
> >>
> >>
> >>
> >> --
> >> Harsh J
> >
> >
> >
> > --
> > in a hadoop learning cycle
> >
>
> --
> Regards,
>     Mohammad Tariq
>
>


-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Charles,

   Have you added the dfs.name.dir and dfs.data.dir properties to your
hdfs-site.xml file? These properties default to the /tmp dir, so at
each restart both data and metadata are lost.
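
For example, something like this in hdfs-site.xml (the paths below are
just placeholders; they must exist and be writable by the hadoop user):

  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>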


On Monday, August 27, 2012, Charles AI <ha...@gmail.com> wrote:
> thank you guys.
> the logs say my dfs.name.dir is not consistent:
> Directory /home/hadoop/hadoopfs/name is in an inconsistent state: storage
directory does not exist or is not accessible.
> And the namenode starts after "hadoop namenode format".
>
>
> On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:
>>
>> Charles,
>>
>> Can you check your NN logs to see if it is properly up?
>>
>> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
>> > Hi All,
>> > I was running a cluster of one master and 4 slaves. I copied the
>> > hadoop_install folder from the master to all 4 slaves, and configured
them
>> > well.
>> > How ever when i sh start-all.sh from the master machine. It shows
below:
>> >
>> > starting namenode, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
>> > slave2: ssh: connect to host slave2 port 22: Connection refused
>> > master: starting datanode, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
>> > slave4: starting datanode, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
>> > slave3: starting datanode, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
>> > slave1: starting datanode, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
>> > master: starting secondarynamenode, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
>> > starting jobtracker, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
>> > slave2: ssh: connect to host slave2 port 22: Connection refused
>> > slave4: starting tasktracker, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
>> > master: starting tasktracker, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
>> > slave3: starting tasktracker, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
>> > slave1: starting tasktracker, logging to
>> >
/usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
>> >
>> > I know that slave2 is not on. But that should not be the problem.
After this
>> > , I typed 'jps' in the master's shell, and it shows:
>> > 6907 Jps
>> > 6306 DataNode
>> > 6838 TaskTracker
>> > 6612 JobTracker
>> > 6533 SecondaryNameNode
>> >
>> > And when I opened this link "localhost:50030",the page said :
>> > master Hadoop Map/Reduce Administration
>> > Quick Links
>> > State: INITIALIZING
>> > Started: Mon Aug 27 14:54:46 CST 2012
>> > Version: 0.20.2, r911707
>> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
>> > Identifier: 201208271454
>> >
>> > I don't quite get what the "State : INITIALIZING" means. Additionally,
i
>> > cannot open "localhost:50070".
>> >
>> > So, Any suggestions ?
>> >
>> > Thanks in advance.
>> > CH
>> > --
>> > in a hadoop learning cycle
>>
>>
>>
>> --
>> Harsh J
>
>
>
> --
> in a hadoop learning cycle
>

-- 
Regards,
    Mohammad Tariq

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
Thank you, guys.
The logs say my dfs.name.dir is not consistent:
Directory /home/hadoop/hadoopfs/name is in an inconsistent state: storage
directory does not exist or is not accessible.

And the namenode starts after "hadoop namenode -format".
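
(In other words, something like the following, assuming the daemons run
as the hadoop user:

  mkdir -p /home/hadoop/hadoopfs/name
  chown -R hadoop:hadoop /home/hadoop/hadoopfs
  hadoop namenode -format   # warning: this wipes any existing HDFS metadata
)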



On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:

> Charles,
>
> Can you check your NN logs to see if it is properly up?
>
> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> > Hi All,
> > I was running a cluster of one master and 4 slaves. I copied the
> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> > well.
> > How ever when i sh start-all.sh from the master machine. It shows below:
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > master: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> > slave4: starting datanode, logging to
> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> > slave3: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> > slave1: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> > master: starting secondarynamenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > slave4: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> > master: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> > slave3: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> > slave1: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >
> > I know that slave2 is not on. But that should not be the problem. After
> this
> > , I typed 'jps' in the master's shell, and it shows:
> > 6907 Jps
> > 6306 DataNode
> > 6838 TaskTracker
> > 6612 JobTracker
> > 6533 SecondaryNameNode
> >
> > And when I opened this link "localhost:50030",the page said :
> > master Hadoop Map/Reduce Administration
> > Quick Links
> > State: INITIALIZING
> > Started: Mon Aug 27 14:54:46 CST 2012
> > Version: 0.20.2, r911707
> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> > Identifier: 201208271454
> >
> > I don't quite get what the "State : INITIALIZING" means. Additionally, i
> > cannot open "localhost:50070".
> >
> > So, Any suggestions ?
> >
> > Thanks in advance.
> > CH
> > --
> > in a hadoop learning cycle
>
>
>
> --
> Harsh J
>



-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
thank you guys.
the logs say my dfs.name.dir is not consistent:
Directory /home/hadoop/hadoopfs/name is in an inconsistent state: storage
directory does not exist or is not accessible.

And the namenode starts after "hadoop namenode format".



On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:

> Charles,
>
> Can you check your NN logs to see if it is properly up?
>
> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> > Hi All,
> > I was running a cluster of one master and 4 slaves. I copied the
> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> > well.
> > How ever when i sh start-all.sh from the master machine. It shows below:
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > master: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> > slave4: starting datanode, logging to
> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> > slave3: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> > slave1: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> > master: starting secondarynamenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > slave4: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> > master: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> > slave3: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> > slave1: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >
> > I know that slave2 is not on. But that should not be the problem. After
> this
> > , I typed 'jps' in the master's shell, and it shows:
> > 6907 Jps
> > 6306 DataNode
> > 6838 TaskTracker
> > 6612 JobTracker
> > 6533 SecondaryNameNode
> >
> > And when I opened this link "localhost:50030",the page said :
> > master Hadoop Map/Reduce Administration
> > Quick Links
> > State: INITIALIZING
> > Started: Mon Aug 27 14:54:46 CST 2012
> > Version: 0.20.2, r911707
> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> > Identifier: 201208271454
> >
> > I don't quite get what the "State : INITIALIZING" means. Additionally, i
> > cannot open "localhost:50070".
> >
> > So, Any suggestions ?
> >
> > Thanks in advance.
> > CH
> > --
> > in a hadoop learning cycle
>
>
>
> --
> Harsh J
>



-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
thank you guys.
the logs say my dfs.name.dir is not consistent:
Directory /home/hadoop/hadoopfs/name is in an inconsistent state: storage
directory does not exist or is not accessible.

And the namenode starts after "hadoop namenode format".



On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:

> Charles,
>
> Can you check your NN logs to see if it is properly up?
>
> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> > Hi All,
> > I was running a cluster of one master and 4 slaves. I copied the
> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> > well.
> > How ever when i sh start-all.sh from the master machine. It shows below:
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > master: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> > slave4: starting datanode, logging to
> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> > slave3: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> > slave1: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> > master: starting secondarynamenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > slave4: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> > master: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> > slave3: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> > slave1: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >
> > I know that slave2 is not on. But that should not be the problem. After
> this
> > , I typed 'jps' in the master's shell, and it shows:
> > 6907 Jps
> > 6306 DataNode
> > 6838 TaskTracker
> > 6612 JobTracker
> > 6533 SecondaryNameNode
> >
> > And when I opened this link "localhost:50030",the page said :
> > master Hadoop Map/Reduce Administration
> > Quick Links
> > State: INITIALIZING
> > Started: Mon Aug 27 14:54:46 CST 2012
> > Version: 0.20.2, r911707
> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> > Identifier: 201208271454
> >
> > I don't quite get what the "State : INITIALIZING" means. Additionally, i
> > cannot open "localhost:50070".
> >
> > So, Any suggestions ?
> >
> > Thanks in advance.
> > CH
> > --
> > in a hadoop learning cycle
>
>
>
> --
> Harsh J
>



-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
thank you guys.
the logs say my dfs.name.dir is not consistent:
Directory /home/hadoop/hadoopfs/name is in an inconsistent state: storage
directory does not exist or is not accessible.

And the namenode starts after "hadoop namenode format".



On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:

> Charles,
>
> Can you check your NN logs to see if it is properly up?
>
> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> > Hi All,
> > I was running a cluster of one master and 4 slaves. I copied the
> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> > well.
> > How ever when i sh start-all.sh from the master machine. It shows below:
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > master: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> > slave4: starting datanode, logging to
> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> > slave3: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> > slave1: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> > master: starting secondarynamenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > slave4: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> > master: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> > slave3: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> > slave1: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >
> > I know that slave2 is not on. But that should not be the problem. After
> this
> > , I typed 'jps' in the master's shell, and it shows:
> > 6907 Jps
> > 6306 DataNode
> > 6838 TaskTracker
> > 6612 JobTracker
> > 6533 SecondaryNameNode
> >
> > And when I opened this link "localhost:50030",the page said :
> > master Hadoop Map/Reduce Administration
> > Quick Links
> > State: INITIALIZING
> > Started: Mon Aug 27 14:54:46 CST 2012
> > Version: 0.20.2, r911707
> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> > Identifier: 201208271454
> >
> > I don't quite get what the "State : INITIALIZING" means. Additionally, i
> > cannot open "localhost:50070".
> >
> > So, Any suggestions ?
> >
> > Thanks in advance.
> > CH
> > --
> > in a hadoop learning cycle
>
>
>
> --
> Harsh J
>



-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Charles AI <ha...@gmail.com>.
Yeah, thank you. Both NN log and DN log on the master machine are empty
files, having a size of 0.

On Mon, Aug 27, 2012 at 3:16 PM, Harsh J <ha...@cloudera.com> wrote:

> Charles,
>
> Can you check your NN logs to see if it is properly up?
>
> On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> > Hi All,
> > I was running a cluster of one master and 4 slaves. I copied the
> > hadoop_install folder from the master to all 4 slaves, and configured
> them
> > well.
> > How ever when i sh start-all.sh from the master machine. It shows below:
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > master: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> > slave4: starting datanode, logging to
> > /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> > slave3: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> > slave1: starting datanode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> > master: starting secondarynamenode, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> > slave2: ssh: connect to host slave2 port 22: Connection refused
> > slave4: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> > master: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> > slave3: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> > slave1: starting tasktracker, logging to
> >
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
> >
> > I know that slave2 is not on. But that should not be the problem. After
> this
> > , I typed 'jps' in the master's shell, and it shows:
> > 6907 Jps
> > 6306 DataNode
> > 6838 TaskTracker
> > 6612 JobTracker
> > 6533 SecondaryNameNode
> >
> > And when I opened this link "localhost:50030",the page said :
> > master Hadoop Map/Reduce Administration
> > Quick Links
> > State: INITIALIZING
> > Started: Mon Aug 27 14:54:46 CST 2012
> > Version: 0.20.2, r911707
> > Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> > Identifier: 201208271454
> >
> > I don't quite get what the "State : INITIALIZING" means. Additionally, i
> > cannot open "localhost:50070".
> >
> > So, Any suggestions ?
> >
> > Thanks in advance.
> > CH
> > --
> > in a hadoop learning cycle
>
>
>
> --
> Harsh J
>



-- 
in a hadoop learning cycle

Re: Why cannot I start namenode or localhost:50070 ?

Posted by Harsh J <ha...@cloudera.com>.
Charles,

Can you check your NN logs to see if it is properly up?
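
For example, something along these lines should show whether the NameNode
process is up and what it last logged (a sketch: the log directory is the
one from your start-all.sh output, and the .log file name is assumed from
the usual hadoop-daemon.sh naming; the .out files often stay empty):

  jps | grep -i namenode
  # .log name assumed; start-all.sh only prints the matching .out path
  tail -n 100 /usr/local/hadoop/hadoop/logs/hadoop-hadoop-namenode-west-desktop.log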

On Mon, Aug 27, 2012 at 12:33 PM, Charles AI <ha...@gmail.com> wrote:
> Hi All,
> I was running a cluster of one master and 4 slaves. I copied the
> hadoop_install folder from the master to all 4 slaves, and configured them
> well.
> How ever when i sh start-all.sh from the master machine. It shows below:
>
> starting namenode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-west-desktop.out
> slave2: ssh: connect to host slave2 port 22: Connection refused
> master: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-west-desktop.out
> slave4: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop12.out
> slave3: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop-desktop.out
> slave1: starting datanode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-kong-desktop.out
> master: starting secondarynamenode, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-west-desktop.out
> starting jobtracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-west-desktop.out
> slave2: ssh: connect to host slave2 port 22: Connection refused
> slave4: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop12.out
> master: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-west-desktop.out
> slave3: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop-desktop.out
> slave1: starting tasktracker, logging to
> /usr/local/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-kong-desktop.out
>
> I know that slave2 is not on. But that should not be the problem. After this
> , I typed 'jps' in the master's shell, and it shows:
> 6907 Jps
> 6306 DataNode
> 6838 TaskTracker
> 6612 JobTracker
> 6533 SecondaryNameNode
>
> And when I opened this link "localhost:50030",the page said :
> master Hadoop Map/Reduce Administration
> Quick Links
> State: INITIALIZING
> Started: Mon Aug 27 14:54:46 CST 2012
> Version: 0.20.2, r911707
> Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> Identifier: 201208271454
>
> I don't quite get what the "State : INITIALIZING" means. Additionally, i
> cannot open "localhost:50070".
>
> So, Any suggestions ?
>
> Thanks in advance.
> CH
> --
> in a hadoop learning cycle



-- 
Harsh J
