Posted to common-user@hadoop.apache.org by ZhiHong Fu <dd...@gmail.com> on 2008/11/08 10:59:50 UTC

hadoop start problem

Hi:


         I have encountered a strange problem. I installed hadoop 0.15.2 on
my computer several months ago, but now I want to upgrade to 0.18.1. I
deleted the hadoop-0.15.2 directory, copied in hadoop 0.18.1, did some very
simple configuration following the hadoop pseudo-distributed tutorial, and
modified hadoop-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

I can run bin/hadoop namenode -format, but when I run the command
bin/start-dfs.sh, it throws a NullPointerException:

localhost: Exception in thread "main" java.lang.NullPointerException
localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
localhost:      at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:116)
localhost:      at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:120)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.initialize(SecondaryNameNode.java:124)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.<init>(SecondaryNameNode.java:108)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.main(SecondaryNameNode.java:460)


I don't know why; I could run hadoop-0.15.2 very well.

Re: hadoop start problem

Posted by Allen Wittenauer <aw...@yahoo-inc.com>.
On 11/10/08 6:18 AM, "Brian MacKay" <Br...@MEDecision.com> wrote:
> I had a similar problem when I upgraded...  not sure of the details why,
> but I had permissions problems trying to develop and run on Windows out
> of Cygwin.

    At ApacheCon, we think we identified a case where someone forgot to copy
the newer hadoop-default.xml into the old configuration directory they kept
using post-upgrade.  Hadoop acts really strangely under those conditions.
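
    If that's what happened here, refreshing the defaults from the new
release before starting anything should fix it; roughly like this (the
paths are only an example, adjust to your layout):

        # copy the 0.18.1 defaults over the stale copy in the reused conf dir
        cp /path/to/hadoop-0.18.1/conf/hadoop-default.xml /path/to/old/conf/
        # hadoop-site.xml overrides can stay; only the defaults file is stale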


RE: hadoop start problem

Posted by Brian MacKay <Br...@MEDecision.com>.
I had a similar problem when I upgraded...  not sure of the details why,
but I had permissions problems trying to develop and run on Windows out
of Cygwin.


I found that in Cygwin, if I ran under my account I got the null pointer
exception, but if I ssh'd to localhost first and then formatted the
namenode, Hadoop ran as SYSTEM and initialized and ran properly.

Somehow, your permissions are likely the problem.
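
Something like this is what worked for me (from memory, adjust the path to
your install):

    ssh localhost                  # the Cygwin sshd session runs as SYSTEM
    cd /usr/local/hadoop-0.18.1    # or wherever the new release lives
    bin/hadoop namenode -format    # format under the account sshd uses
    bin/start-dfs.sh               # DFS should come up without the NPE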





-----Original Message-----
From: Aaron Kimball [mailto:aaron@cloudera.com] 
Sent: Monday, November 10, 2008 4:39 AM
To: core-user@hadoop.apache.org
Subject: Re: hadoop start problem

Between 0.15 and 0.18 the format for fs.default.name has changed; you
should set the value there as "hdfs://localhost:9000/" without the quotes.

It still shouldn't give you an NPE under any circumstances (that should
probably get a JIRA entry), but putting a value in the (new) proper format
might get your system working.

- Aaron

On Sat, Nov 8, 2008 at 1:59 AM, ZhiHong Fu <dd...@gmail.com> wrote:

> [original message snipped]



Re: hadoop start problem

Posted by Aaron Kimball <aa...@cloudera.com>.
Between 0.15 and 0.18 the format for fs.default.name has changed; you should
set the value there as "hdfs://localhost:9000/" without the quotes.

It still shouldn't give you an NPE under any circumstances (that should
probably get a JIRA entry), but putting a value in the (new) proper format
might get your system working.
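
My guess at the mechanism: the old-style value "localhost:9000" now parses
as a URI whose scheme is "localhost" and whose authority is null, and that
null is what NetUtils.createSocketAddr trips over.  The property should end
up looking like this (reusing your address; adjust if yours differs):

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000/</value>
</property>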

- Aaron

On Sat, Nov 8, 2008 at 1:59 AM, ZhiHong Fu <dd...@gmail.com> wrote:

> [original message snipped]