Posted to hdfs-user@hadoop.apache.org by Praveen Sripati <pr...@gmail.com> on 2012/01/11 07:54:22 UTC

Fwd: HDFS Federation Exception

Hi,

I am trying to set up HDFS federation and am getting the error below. I have
also pasted core-site.xml and hdfs-site.xml at the bottom of the mail. Did I
miss something in the configuration files?

2012-01-11 12:12:15,759 ERROR namenode.NameNode (NameNode.java:main(803)) -
Exception in namenode join
java.lang.IllegalArgumentException: Can't parse port ''
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)

*core-site.xml*

<?xml version="1.0"?>
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/praveensripati/tmp/hadoop-0.23.0/tmp</value>
    </property>
</configuration>

*hdfs-site.xml*

<?xml version="1.0"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.federation.nameservices</name>
        <value>ns1</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <value>hdfs://praveen-laptop:9001</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1</name>
        <value>praveen-laptop:50071</value>
    </property>
    <property>
        <name>dfs.namenode.secondaryhttp-address.ns1</name>
        <value>praveen-laptop:50091</value>
    </property>
</configuration>

Regards,
Praveen

Re: HDFS Federation Exception

Posted by Suresh Srinivas <su...@hortonworks.com>.
Thanks for filing the JIRA.

Sent from phone

On Jan 11, 2012, at 8:48 AM, Praveen Sripati <pr...@gmail.com> wrote:

> Suresh,
> 
> Here is the JIRA - https://issues.apache.org/jira/browse/HDFS-2778
> 
> Regards,
> Praveen
> 
> On Wed, Jan 11, 2012 at 9:28 PM, Suresh Srinivas <su...@hortonworks.com> wrote:
> Thanks for figuring that out. Could you create an HDFS JIRA for this issue?
> 
> 
> On Wednesday, January 11, 2012, Praveen Sripati <pr...@gmail.com> wrote:
> > Hi,
> >
> > The documentation (1) suggests setting the `dfs.namenode.rpc-address.ns1` property to `hdfs://nn-host1:rpc-port` in its example. Changing the value to `nn-host1:rpc-port` (removing the hdfs:// prefix) solved the problem. The document needs to be updated.
> >
> > (1) - http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html
> >
> > Praveen
> >
> > On Wed, Jan 11, 2012 at 3:40 PM, Praveen Sripati <pr...@gmail.com> wrote:
> >
> > Hi,
> >
> > I got the latest code to see whether any bugs had been fixed and tried federation with the same configuration, but I was getting a similar exception.
> >
> > 2012-01-11 15:25:35,321 ERROR namenode.NameNode (NameNode.java:main(803)) - Exception in namenode join
> > java.io.IOException: Failed on local exception: java.net.SocketException: Unresolved address; Host Details : local host is: "hdfs"; destination host is: "(unknown):0;
> >         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
> >         at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
> >         at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
> >         at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
> > Caused by: java.net.SocketException: Unresolved address
> >         at sun.nio.ch.Net.translateToSocketException(Net.java:58)
> >         at sun.nio.ch.Net.translateException(Net.java:84)
> >         at sun.nio.ch.Net.translateException(Net.java:90)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
> >         ... 14 more
> > Caused by: java.nio.channels.UnresolvedAddressException
> >         at sun.nio.ch.Net.checkAddress(Net.java:30)
> >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >         ... 15 more
> >
> > Regards,
> > Praveen
> >
> > On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati <pr...@gmail.com> wrote:
> >
> > Hi,
> >
> > I am trying to set up HDFS federation and am getting the error below. I have also pasted core-site.xml and hdfs-site.xml at the bottom of the mail. Did I miss something in the configuration files?
> >
> > 2012-01-11 12:12:15,759 ERROR namenode.NameNode (NameNode.java:main(803)) - Exception in namenode join
> > java.lang.IllegalArgumentException: Can't parse port ''
> >         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
> >         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
> >         at org.apache.hadoop.hdfs.server.namenode.N
> 

Re: HDFS Federation Exception

Posted by Praveen Sripati <pr...@gmail.com>.
Suresh,

Here is the JIRA - https://issues.apache.org/jira/browse/HDFS-2778

Regards,
Praveen

On Wed, Jan 11, 2012 at 9:28 PM, Suresh Srinivas <su...@hortonworks.com> wrote:

> Thanks for figuring that out. Could you create an HDFS JIRA for this issue?
>
>
> On Wednesday, January 11, 2012, Praveen Sripati <pr...@gmail.com>
> wrote:
> > Hi,
> >
> > The documentation (1) suggests setting the
> `dfs.namenode.rpc-address.ns1` property to `hdfs://nn-host1:rpc-port` in
> its example. Changing the value to `nn-host1:rpc-port` (removing the
> hdfs:// prefix) solved the problem. The document needs to be updated.
> >
> > (1) -
> http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html
> >
> > Praveen
> >
> > On Wed, Jan 11, 2012 at 3:40 PM, Praveen Sripati <
> praveensripati@gmail.com> wrote:
> >
> > Hi,
> >
> > I got the latest code to see whether any bugs had been fixed and tried
> federation with the same configuration, but I was getting a similar exception.
> >
> > 2012-01-11 15:25:35,321 ERROR namenode.NameNode
> (NameNode.java:main(803)) - Exception in namenode join
> > java.io.IOException: Failed on local exception:
> java.net.SocketException: Unresolved address; Host Details : local host is:
> "hdfs"; destination host is: "(unknown):0;
> >         at
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
> >         at
> org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
> >         at
> org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
> >         at
> org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
> > Caused by: java.net.SocketException: Unresolved address
> >         at sun.nio.ch.Net.translateToSocketException(Net.java:58)
> >         at sun.nio.ch.Net.translateException(Net.java:84)
> >         at sun.nio.ch.Net.translateException(Net.java:90)
> >         at
> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
> >         ... 14 more
> > Caused by: java.nio.channels.UnresolvedAddressException
> >         at sun.nio.ch.Net.checkAddress(Net.java:30)
> >         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
> >         at
> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >         ... 15 more
> >
> > Regards,
> > Praveen
> >
> > On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati <
> praveensripati@gmail.com> wrote:
> >
> > Hi,
> >
> > I am trying to set up HDFS federation and am getting the error below. I
> have also pasted core-site.xml and hdfs-site.xml at the bottom of the mail.
> Did I miss something in the configuration files?
> >
> > 2012-01-11 12:12:15,759 ERROR namenode.NameNode
> (NameNode.java:main(803)) - Exception in namenode join
> > java.lang.IllegalArgumentException: Can't parse port ''
> >         at
> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
> >         at
> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
> >         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
> >         at org.apache.hadoop.hdfs.server.namenode.N
>

Re: HDFS Federation Exception

Posted by Suresh Srinivas <su...@hortonworks.com>.
Thanks for figuring that out. Could you create an HDFS JIRA for this issue?

On Wednesday, January 11, 2012, Praveen Sripati <pr...@gmail.com>
wrote:
> Hi,
>
> The documentation (1) suggests setting the `dfs.namenode.rpc-address.ns1`
property to `hdfs://nn-host1:rpc-port` in its example. Changing the value
to `nn-host1:rpc-port` (removing the hdfs:// prefix) solved the problem. The
document needs to be updated.
>
> (1) -
http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html
>
> Praveen
>
> On Wed, Jan 11, 2012 at 3:40 PM, Praveen Sripati <pr...@gmail.com>
wrote:
>
> Hi,
>
> I got the latest code to see whether any bugs had been fixed and tried
federation with the same configuration, but I was getting a similar exception.
>
> 2012-01-11 15:25:35,321 ERROR namenode.NameNode (NameNode.java:main(803))
- Exception in namenode join
> java.io.IOException: Failed on local exception: java.net.SocketException:
Unresolved address; Host Details : local host is: "hdfs"; destination host
is: "(unknown):0;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
>         at
org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
>         at
org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
>         at
org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
> Caused by: java.net.SocketException: Unresolved address
>         at sun.nio.ch.Net.translateToSocketException(Net.java:58)
>         at sun.nio.ch.Net.translateException(Net.java:84)
>         at sun.nio.ch.Net.translateException(Net.java:90)
>         at
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
>         ... 14 more
> Caused by: java.nio.channels.UnresolvedAddressException
>         at sun.nio.ch.Net.checkAddress(Net.java:30)
>         at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
>         at
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         ... 15 more
>
> Regards,
> Praveen
>
> On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati <
praveensripati@gmail.com> wrote:
>
> Hi,
>
> I am trying to set up HDFS federation and am getting the error below. I have
also pasted core-site.xml and hdfs-site.xml at the bottom of the mail. Did I
miss something in the configuration files?
>
> 2012-01-11 12:12:15,759 ERROR namenode.NameNode (NameNode.java:main(803))
- Exception in namenode join
> java.lang.IllegalArgumentException: Can't parse port ''
>         at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
>         at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
>         at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
>         at org.apache.hadoop.hdfs.server.namenode.N

Re: HDFS Federation Exception

Posted by Praveen Sripati <pr...@gmail.com>.
Hi,

The documentation (1) suggests setting the `dfs.namenode.rpc-address.ns1`
property to `hdfs://nn-host1:rpc-port` in its example. Changing the value
to `nn-host1:rpc-port` (removing the hdfs:// prefix) solved the problem. The
document needs to be updated.

(1) -
http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html
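
For reference, here is a minimal sketch of the corrected hdfs-site.xml
entry, reusing the ns1 nameservice and the praveen-laptop:9001 address from
the configuration posted earlier; only the hdfs:// scheme is dropped:

    <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <!-- plain host:port, no hdfs:// scheme prefix -->
        <value>praveen-laptop:9001</value>
    </property>

Client-side URIs such as fs.defaultFS still carry the hdfs:// scheme; the
per-nameservice dfs.namenode.*-address properties expect a bare host:port,
just like the http-address entries already in the same file.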

Praveen

On Wed, Jan 11, 2012 at 3:40 PM, Praveen Sripati
<pr...@gmail.com> wrote:

> Hi,
>
> I got the latest code to see whether any bugs had been fixed and tried
> federation with the same configuration, but I was getting a similar exception.
>
> 2012-01-11 15:25:35,321 ERROR namenode.NameNode (NameNode.java:main(803))
> - Exception in namenode join
> java.io.IOException: Failed on local exception: java.net.SocketException:
> Unresolved address; Host Details : local host is: "hdfs"; destination host
> is: "(unknown):0;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
>         at
> org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
>         at
> org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
>         at
> org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
> Caused by: java.net.SocketException: Unresolved address
>         at sun.nio.ch.Net.translateToSocketException(Net.java:58)
>         at sun.nio.ch.Net.translateException(Net.java:84)
>         at sun.nio.ch.Net.translateException(Net.java:90)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
>         ... 14 more
> Caused by: java.nio.channels.UnresolvedAddressException
>         at sun.nio.ch.Net.checkAddress(Net.java:30)
>         at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         ... 15 more
>
> Regards,
> Praveen
>
> On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati <
> praveensripati@gmail.com> wrote:
>
>>
>> Hi,
>>
>> I am trying to set up HDFS federation and am getting the error below. I
>> have also pasted core-site.xml and hdfs-site.xml at the bottom of the mail.
>> Did I miss something in the configuration files?
>>
>> 2012-01-11 12:12:15,759 ERROR namenode.NameNode
>> (NameNode.java:main(803)) - Exception in namenode join
>> java.lang.IllegalArgumentException: Can't parse port ''
>>         at
>> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
>>         at
>> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
>>
>> *core-site.xml*
>>
>> <?xml version="1.0"?>
>> <configuration>
>>     <property>
>>         <name>hadoop.tmp.dir</name>
>>         <value>/home/praveensripati/tmp/hadoop-0.23.0/tmp</value>
>>     </property>
>> </configuration>
>>
>> *hdfs-site.xml*
>>
>> <?xml version="1.0"?>
>> <configuration>
>>     <property>
>>         <name>dfs.replication</name>
>>         <value>1</value>
>>     </property>
>>     <property>
>>         <name>dfs.permissions</name>
>>         <value>false</value>
>>     </property>
>>     <property>
>>         <name>dfs.federation.nameservices</name>
>>         <value>ns1</value>
>>     </property>
>>     <property>
>>         <name>dfs.namenode.rpc-address.ns1</name>
>>         <value>hdfs://praveen-laptop:9001</value>
>>     </property>
>>     <property>
>>         <name>dfs.namenode.http-address.ns1</name>
>>         <value>praveen-laptop:50071</value>
>>     </property>
>>     <property>
>>         <name>dfs.namenode.secondaryhttp-address.ns1</name>
>>         <value>praveen-laptop:50091</value>
>>     </property>
>> </configuration>
>>
>> Regards,
>> Praveen
>>
>>
>

Re: HDFS Federation Exception

Posted by Praveen Sripati <pr...@gmail.com>.
Hi,

I got the latest code to see whether any bugs had been fixed and tried
federation with the same configuration, but I was getting a similar exception.

2012-01-11 15:25:35,321 ERROR namenode.NameNode (NameNode.java:main(803)) -
Exception in namenode join
java.io.IOException: Failed on local exception: java.net.SocketException:
Unresolved address; Host Details : local host is: "hdfs"; destination host
is: "(unknown):0;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
        at org.apache.hadoop.ipc.Server.bind(Server.java:231)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
        at
org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
        at
org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
        at
org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
Caused by: java.net.SocketException: Unresolved address
        at sun.nio.ch.Net.translateToSocketException(Net.java:58)
        at sun.nio.ch.Net.translateException(Net.java:84)
        at sun.nio.ch.Net.translateException(Net.java:90)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
        at org.apache.hadoop.ipc.Server.bind(Server.java:229)
        ... 14 more
Caused by: java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:30)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        ... 15 more

Regards,
Praveen

On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati
<pr...@gmail.com> wrote:

>
> Hi,
>
> I am trying to set up HDFS federation and am getting the error below. I
> have also pasted core-site.xml and hdfs-site.xml at the bottom of the mail.
> Did I miss something in the configuration files?
>
> 2012-01-11 12:12:15,759 ERROR namenode.NameNode (NameNode.java:main(803))
> - Exception in namenode join
> java.lang.IllegalArgumentException: Can't parse port ''
>         at
> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
>         at
> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
>
> *core-site.xml*
>
> <?xml version="1.0"?>
> <configuration>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/praveensripati/tmp/hadoop-0.23.0/tmp</value>
>     </property>
> </configuration>
>
> *hdfs-site.xml*
>
> <?xml version="1.0"?>
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>     <property>
>         <name>dfs.permissions</name>
>         <value>false</value>
>     </property>
>     <property>
>         <name>dfs.federation.nameservices</name>
>         <value>ns1</value>
>     </property>
>     <property>
>         <name>dfs.namenode.rpc-address.ns1</name>
>         <value>hdfs://praveen-laptop:9001</value>
>     </property>
>     <property>
>         <name>dfs.namenode.http-address.ns1</name>
>         <value>praveen-laptop:50071</value>
>     </property>
>     <property>
>         <name>dfs.namenode.secondaryhttp-address.ns1</name>
>         <value>praveen-laptop:50091</value>
>     </property>
> </configuration>
>
> Regards,
> Praveen
>
>