You are viewing a plain text version of this content. The canonical link for it is here.
Posted to user@hbase.apache.org by Michael Scott <mj...@gmail.com> on 2010/09/14 07:16:44 UTC

hbase standalone cannot start master, cannot assign requested address at port 60000

Hi,

I am trying to install a standalone hbase server on Fedora Core 11.  I have
hadoop running:

bash-4.0$ jps
30908 JobTracker
30631 NameNode
30824 SecondaryNameNode
30731 DataNode
30987 TaskTracker
31137 Jps

The only edit I have made to the hbase-0.20.6 directory from the tarball is
to point to the Java installation (the same as used by hadoop):
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/

I have verified sshd passwordless login for hadoop for all variations of the
hostname (localhost, qualifiedname.com, www.qualifiedname.com, straight IP
address), and have added the qualified hostnames to /etc/hosts just to be
sure.

When I attempt to start the hbase server with start-hbase.sh (as hadoop) the
following appears in the log file:

2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster: My
address is qualifiedname.com:60000
2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster: Can
not start master
java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
assign requested address
        at
org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
        at
org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
        at
org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
        at
org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
        at
org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind(Native Method)
        at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at
org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
        ... 9 more

At this point zookeeper is apparently running, but hbase master is not:
bash-4.0$ jps
31454 HQuorumPeer
30908 JobTracker
30631 NameNode
30824 SecondaryNameNode
30731 DataNode
31670 Jps
30987 TaskTracker

I am stumped -- the documentation simply says that the standalone server
should work out of the box, and it would seem to me that  hadoop.  Does
anyone have any suggestions here?  Thanks in advance!

Michael

Michael

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
THANK YOU.  It is now listening on port 60000

Michael

On Thu, Sep 16, 2010 at 3:51 PM, Ryan Rawson <ry...@gmail.com> wrote:

> Hey,
>
> Ok the picture is all clear.
>
> So HBase is a minimally configured system... You dont want to specify
> the bind address in your config file, because usually you have 1 file
> that you distribute to dozens or even potentially hundreds of systems.
>  So specifying configuration for 1 system is just not really the way
> to go with clustered software.
>
> So what does hbase do?  We need to know the node's identity so when we
> register ourselves we know what our IP is, and that IP goes into the
> META table.  So we grab the hostname (as per 'hostname' on most
> systems).  Then reverse DNS it, use that IP to bind to.
>
> In this case, the problem is your hostname is reversing to the
> external IP which your host doesnt actually have an interface to.  If
> you want to run internal network services behind a NAT you will need
> to have local IPs and hostnames and not reuse your external name/IP as
> internal hostnames.
>
> So, change your hostname to 'myhost' and make sure it DNS reverses to
> 10.0.0.2 (your real IP) and you should be off to the races.
>
> -ryan
>
> On Thu, Sep 16, 2010 at 1:06 PM, Michael Scott <mj...@gmail.com>
> wrote:
> > Thanks again.  This changes the behavior, but it does not yet fix my
> > problem.  The hbase.rootdir property forces the hbase master to stay
> alive
> > for a little while, so I had a moment of short-lived euphoria when
> Hmaster
> > appeared in the jps list, but this only lasts while it tries to connect
> to
> > localhost:9000 (which is not open), and it still doesn't open port 60000
> and
> > it still thinks it is named my-static-ip.com (i.e., same error message
> as
> > before).  The removal of localhost.localdomain from /etc/hosts made no
> > difference one way or the other.  I still am looking for a way to try to
> > have hbase bind to localhost:6000 instead of my-static-ip.com:6000.  I
> will
> > also try to see why localhost:9000 is not open (though that appears later
> in
> > the log file, so I don't think it is causing the failure to open 60000).
> > Thanks for the help so far, I will post again with further info.
> >
> > Michael
> >
> > On Thu, Sep 16, 2010 at 12:53 PM, N.N. Gesli <nn...@gmail.com> wrote:
> >
> >> I have this in hbase-site.xml:
> >>
> >>  <property>
> >>    <name>hbase.rootdir</name>
> >>    <value>hdfs://localhost:9000/hbase</value>
> >>    <description>The directory shared by region servers.
> >>    Should be fully-qualified to include the filesystem to use.
> >>    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
> >>    </description>
> >>  </property>
> >>
> >>  <property>
> >>    <name>hbase.cluster.distributed</name>
> >>    <value>true</value>
> >>    <description>For psuedo-distributed, you want to set this to true.
> >>    false means that HBase tries to put Master + RegionServers in one
> >> process.
> >>    Pseudo-distributed = seperate processes/pids</description>
> >>  </property>
> >>
> >>  <property>
> >>    <name>hbase.regionserver.hlog.replication</name>
> >>    <value>1</value>
> >>    <description>For HBase to offer good data durability, we roll logs if
> >>    filesystem replication falls below a certain amount.  In
> >> psuedo-distributed
> >>    mode, you normally only have the local filesystem or 1 HDFS DataNode,
> so
> >> you
> >>    don't want to roll logs constantly.</description>
> >>  </property>
> >>
> >>  <property>
> >>    <name>hbase.tmp.dir</name>
> >>    <value>/tmp/hbase-testing</value>
> >>    <description>Temporary directory on the local
> filesystem.</description>
> >>  </property>
> >>
> >> I also hase Hadoop conf directory in HBASE_CLASSPATH (hbase-env.sh).
> >>
> >> I just tried etc/hosts with "127.0.0.1
> localhost.localdomain
> >> localhost" line. I got the same error I was getting before. I switched
> it
> >> back to "127.0.0.1       localhost" and it worked. In between those
> >> changes,
> >> I stopped hbase, hadoop and killed still running region server. I hope
> that
> >> helps.
> >>
> >> N.Gesli
> >>
> >>
> >>
> >>
> >> On Thu, Sep 16, 2010 at 7:04 AM, Michael Scott <mj...@gmail.com>
> >> wrote:
> >>
> >> > This sounds promising, I have one quick question about your steps:
>  where
> >> > in
> >> > the Hbase config *site*.xml did you make the change back to localhost?
> >>  My
> >> > hbase master is using the public IP address (97.86.88.18), and I don't
> >> > think
> >> > I've told it to.  I want to convince hbase to get rid of the line in
> the
> >> > log
> >> > file that says something like:
> >> >
> >> > 2010-09-16 09:59:21,727 INFO org.apache.hadoop.hbase.master.HMaster:
> My
> >> > address is 97-86-88-18.static.aldl.mi.charter.com:60000
> >> >
> >> > (Note that my /etc/hosts has only the one line
> >> > 127.0.0.1               localhost.localdomain localhost
> >> > since I'm not running ipv6, but somehow hbase knows that the interface
> is
> >> a
> >> > comcast static address.  I can use /etc/hosts to change that to the
> >> > registered domain name for 97-86-88-18, but this doesn't help.)
> >> >
> >> > To reply to Ryan's question, my ifconfig gives:
> >> >
> >> > eth0      Link encap:Ethernet  HWaddr 00:24:E8:01:DA:B8
> >> >          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
> >> >          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >> >          RX packets:319475 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:290698 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:1000
> >> >          RX bytes:108186958 (103.1 MiB)  TX bytes:187845633 (179.1
> MiB)
> >> >          Interrupt:28 Base address:0xa000
> >> >
> >> > lo        Link encap:Local Loopback
> >> >          inet addr:127.0.0.1  Mask:255.0.0.0
> >> >          UP LOOPBACK RUNNING  MTU:16436  Metric:1
> >> >          RX packets:370795 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:370795 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:0
> >> >          RX bytes:108117402 (103.1 MiB)  TX bytes:108117402 (103.1
> MiB)
> >> >
> >> > Thanks a bunch!
> >> >
> >> > Michael
> >> >
> >> > On Thu, Sep 16, 2010 at 1:12 AM, N.N. Gesli <nn...@gmail.com>
> wrote:
> >> >
> >> > > Hi Michael,
> >> > >
> >> > > I was having a similar problem and following this thread for any
> >> > > suggestions. I tried everything suggested and more.
> >> > >
> >> > > I was trying to run Hadoop/Hbase pseudo distributed version on my
> Mac.
> >> I
> >> > > initially started with Hadoop 21.0 and Hbase 0.89 versions. I had
> >> exactly
> >> > > the same error that you were getting. Then switched to Hadoop 20.2
> and
> >> > > Hbase
> >> > > 20.6 - still HMaster was not starting. Then finally it worked. Below
> >> are
> >> > my
> >> > > steps to success :)
> >> > >
> >> > > * stopped hbase
> >> > > * stopped hadoop
> >> > > * run jps; RegionServer was still running; killed it manually
> >> > > * in tmp directory (where hadoop namenode and *.pid files are
> stored) I
> >> > > removed everything related to hadoop and hbase, including the
> >> > directories.
> >> > > (I had no data in Hadoop, so I could do this)
> >> > > * changed the ports back to default 600**
> >> > > * changed back Hadoop and Hbase configurations to "localhost" in
> >> > *site*.xml
> >> > > and regionservers. (Only I will be using this - no remote
> connection)
> >> > > * changed back my /etc/hosts to the original version. It looks like
> >> this:
> >> > > 127.0.0.1    localhost
> >> > > ::1             localhost
> >> > > fe80::1%lo0    localhost
> >> > > * reformatted the Hadoop namenode
> >> > > * started Hadoop
> >> > > * started HBase and it worked :)
> >> > >
> >> > > Let me know if you want to know any specific configuration.
> >> > >
> >> > > N.Gesli
> >> > >
> >> > > On Wed, Sep 15, 2010 at 10:41 PM, Ryan Rawson <ry...@gmail.com>
> >> > wrote:
> >> > >
> >> > > > What is your ifconfig output looking like?
> >> > > >
> >> > > >
> >> > > >
> >> > > > On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <
> >> mjscottuic@gmail.com>
> >> > > > wrote:
> >> > > > > Thanks for the continued advice.  I am still confused by the
> >> > different
> >> > > > > behaviors of hadoop and hbase. As I said before, I can't get
> hbase
> >> to
> >> > > > work
> >> > > > > on any of the ports that hadoop works on, so I guess hadoop and
> >> hbase
> >> > > are
> >> > > > > using different interfaces.  Why is this, and can't I ask hbase
> to
> >> > use
> >> > > > the
> >> > > > > interface that hadoop uses?  What interfaces are hadoop and
> hbase
> >> > > using?
> >> > > > >
> >> > > > > Also (and maybe this is the wrong forum for this question), how
> can
> >> I
> >> > > get
> >> > > > my
> >> > > > > OS to allow me to open 60000 using the IP address?  I have
> >> > temporarily
> >> > > > > disabled selinux and iptables, as I thought that this would
> simply
> >> > > allow
> >> > > > all
> >> > > > > port connections. Still, this works just fine:
> >> > > > > bash-4.0$ nc -l  60000 > /tmp/nc.out
> >> > > > >
> >> > > > > but this does not:
> >> > > > > bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
> >> > > > > (returns "nc: Cannot assign requested address"; I get the same
> >> error
> >> > > for
> >> > > > the
> >> > > > > hostname instead of the IP address, and for 10.0.0.1, but
> 10.0.0.0
> >> is
> >> > > > > allowed)
> >> > > > >
> >> > > > > I am trying to get hbase running for a socorro server, which
> will
> >> > > running
> >> > > > > locally.  I don't know if that matters.
> >> > > > >
> >> > > > > Thanks,
> >> > > > > Michael
> >> > > > >
> >> > > > > On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <
> ryanobjc@gmail.com>
> >> > > wrote:
> >> > > > >
> >> > > > >> Hey,
> >> > > > >>
> >> > > > >> If you bind to localhost you wont actually be reachable by
> anyone!
> >> > > > >>
> >> > > > >> The question is why is your OS disallowing binds to a specific
> >> > > > >> interface/port combo?
> >> > > > >>
> >> > > > >> HBase does not really run in a blended/multihomed
> environment...
> >> > > > >> meaning if you have multiple interfaces, you have to choose one
> >> that
> >> > > > >> we work over.  This is because we need to know a singular
> >> canonical
> >> > > > >> IP/name for any given server because we put that info up inside
> >> > > > >> ZooKeeper and META tables.  So it's not just an artificial
> >> > constraint,
> >> > > > >> but exists for cluster management needs.
> >> > > > >>
> >> > > > >> Having said that, we do work on multihomed machines, eg: ec2,
> you
> >> > > > >> might bind hbase to the internal interface taking advantage of
> the
> >> > > > >> unmetered/faster network. Also better for security as well.
> >> > > > >>
> >> > > > >> Let us know if you need more background on how we use the
> network
> >> > and
> >> > > > why.
> >> > > > >> -ryan
> >> > > > >>
> >> > > > >> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <
> >> > mjscottuic@gmail.com
> >> > > >
> >> > > > >> wrote:
> >> > > > >> > Hi again,
> >> > > > >> >
> >> > > > >> > I think the hbase server master is not starting because it is
> >> > > > attempting
> >> > > > >> to
> >> > > > >> > open port 60000 on its public IP address, rather than using
> >> > > localhost.
> >> > > >  I
> >> > > > >> > cannot seem to figure out how to force it (well, configure
> it)
> >> to
> >> > > > attempt
> >> > > > >> to
> >> > > > >> > bind to localhost:60000 instead.  As far as I can see,  this
> is
> >> > set
> >> > > in
> >> > > > >> the
> >> > > > >> > file:
> >> > > > >> >
> >> > > > >> > org/apache/hadoop/hbase/master/HMaster.java
> >> > > > >> >
> >> > > > >> > I don't know much about java, so I'd prefer not to edit the
> >> source
> >> > > if
> >> > > > >> there
> >> > > > >> > is an option, but I will if necessary.  Can someone please
> point
> >> > me
> >> > > to
> >> > > > >> the
> >> > > > >> > way to change this setting?  Any help would be greatly
> >> > appreciated.
> >> > > > >> >
> >> > > > >> > Thanks,
> >> > > > >> > Michael
> >> > > > >> >
> >> > > > >> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <
> >> > > mjscottuic@gmail.com
> >> > > > >> >wrote:
> >> > > > >> >
> >> > > > >> >> Hi again,
> >> > > > >> >>
> >> > > > >> >> IPV6 was enabled.  I shut it off, rebooted to be sure,
> verified
> >> > it
> >> > > > was
> >> > > > >> >> still off, and encountered the same problem once again.
> >> > > > >> >>
> >> > > > >> >> I also tried to open port 60000 by hand with a small php
> file.
> >>  I
> >> > > can
> >> > > > do
> >> > > > >> >> this (as any user) for localhost.  I can NOT do this (not
> even
> >> as
> >> > > > root)
> >> > > > >> for
> >> > > > >> >> the IP address which matches the fully qualified domain
> name,
> >> > which
> >> > > > is
> >> > > > >> what
> >> > > > >> >> hbase is trying to use.  Is there some way for me to
> configure
> >> > > hbase
> >> > > > to
> >> > > > >> use
> >> > > > >> >> localhost instead of the fully qualified domain name for the
> >> > > master?
> >> > > >  I
> >> > > > >> >> would have thought this was done by default, or that there
> >> would
> >> > be
> >> > > > an
> >> > > > >> >> obvious line in some conf file, but I can't find it.
> >> > > > >> >>
> >> > > > >> >> Thanks again,
> >> > > > >> >>
> >> > > > >> >> Michael
> >> > > > >> >>
> >> > > > >> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <
> >> todd@cloudera.com
> >> > >
> >> > > > >> wrote:
> >> > > > >> >>
> >> > > > >> >>> Hi Michael,
> >> > > > >> >>>
> >> > > > >> >>> It might be related to IPV6. Do you have IPV6 enabled on
> this
> >> > > > machine?
> >> > > > >> >>>
> >> > > > >> >>> Check out this hadoop JIRA that might be related for some
> >> tips:
> >> > > > >> >>> https://issues.apache.org/jira/browse/HADOOP-6056
> >> > > > >> >>>
> >> > > > >> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
> >> > > > >> >>>
> >> > > > >> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <
> >> > > > mjscottuic@gmail.com
> >> > > > >> >>> >wrote:
> >> > > > >> >>>
> >> > > > >> >>> > That's correct.  I tried a number of different ports to
> see
> >> if
> >> > > > there
> >> > > > >> was
> >> > > > >> >>> > something weird, and then I shut down the hadoop server
> and
> >> > > tried
> >> > > > to
> >> > > > >> >>> > connect
> >> > > > >> >>> > to 50010 (which of course should have been free at that
> >> point)
> >> > > but
> >> > > > >> got
> >> > > > >> >>> the
> >> > > > >> >>> > same "cannot assign to requested address" error.  If I
> start
> >> > > > hadoop,
> >> > > > >> >>> > netstat
> >> > > > >> >>> > shows a process listening on 50010.
> >> > > > >> >>> >
> >> > > > >> >>> > I am going to try this on a different OS, I am wondering
> if
> >> > FC11
> >> > > > is
> >> > > > >> my
> >> > > > >> >>> > problem.
> >> > > > >> >>> >
> >> > > > >> >>> > Michael
> >> > > > >> >>> >
> >> > > > >> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <
> stack@duboce.net>
> >> > > wrote:
> >> > > > >> >>> >
> >> > > > >> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
> >> > > > >> mjscottuic@gmail.com>
> >> > > > >> >>> > > wrote:
> >> > > > >> >>> > > > I don't see why hadoop binds
> >> > > > >> >>> > > > to a port but hbase does not (I even tried starting
> >> hbase
> >> > > with
> >> > > > >> >>> hadoop
> >> > > > >> >>> > off
> >> > > > >> >>> > > > and binding to 50010, which hadoop uses).
> >> > > > >> >>> > > >
> >> > > > >> >>> > >
> >> > > > >> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.
> >>  We
> >> > > > hadoop
> >> > > > >> >>> > > their mechanism essentially).
> >> > > > >> >>> > >
> >> > > > >> >>> > > St.Ack
> >> > > > >> >>> > >
> >> > > > >> >>> >
> >> > > > >> >>>
> >> > > > >> >>>
> >> > > > >> >>>
> >> > > > >> >>> --
> >> > > > >> >>> Todd Lipcon
> >> > > > >> >>> Software Engineer, Cloudera
> >> > > > >> >>>
> >> > > > >> >>
> >> > > > >> >>
> >> > > > >> >
> >> > > > >>
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Ryan Rawson <ry...@gmail.com>.
Hey,

Ok the picture is all clear.

So HBase is a minimally configured system... You dont want to specify
the bind address in your config file, because usually you have 1 file
that you distribute to dozens or even potentially hundreds of systems.
 So specifying configuration for 1 system is just not really the way
to go with clustered software.

So what does hbase do?  We need to know the node's identity so when we
register ourselves we know what our IP is, and that IP goes into the
META table.  So we grab the hostname (as per 'hostname' on most
systems).  Then reverse DNS it, use that IP to bind to.

In this case, the problem is your hostname is reversing to the
external IP which your host doesnt actually have an interface to.  If
you want to run internal network services behind a NAT you will need
to have local IPs and hostnames and not reuse your external name/IP as
internal hostnames.

So, change your hostname to 'myhost' and make sure it DNS reverses to
10.0.0.2 (your real IP) and you should be off to the races.

-ryan

On Thu, Sep 16, 2010 at 1:06 PM, Michael Scott <mj...@gmail.com> wrote:
> Thanks again.  This changes the behavior, but it does not yet fix my
> problem.  The hbase.rootdir property forces the hbase master to stay alive
> for a little while, so I had a moment of short-lived euphoria when Hmaster
> appeared in the jps list, but this only lasts while it tries to connect to
> localhost:9000 (which is not open), and it still doesn't open port 60000 and
> it still thinks it is named my-static-ip.com (i.e., same error message as
> before).  The removal of localhost.localdomain from /etc/hosts made no
> difference one way or the other.  I still am looking for a way to try to
> have hbase bind to localhost:6000 instead of my-static-ip.com:6000.  I will
> also try to see why localhost:9000 is not open (though that appears later in
> the log file, so I don't think it is causing the failure to open 60000).
> Thanks for the help so far, I will post again with further info.
>
> Michael
>
> On Thu, Sep 16, 2010 at 12:53 PM, N.N. Gesli <nn...@gmail.com> wrote:
>
>> I have this in hbase-site.xml:
>>
>>  <property>
>>    <name>hbase.rootdir</name>
>>    <value>hdfs://localhost:9000/hbase</value>
>>    <description>The directory shared by region servers.
>>    Should be fully-qualified to include the filesystem to use.
>>    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
>>    </description>
>>  </property>
>>
>>  <property>
>>    <name>hbase.cluster.distributed</name>
>>    <value>true</value>
>>    <description>For psuedo-distributed, you want to set this to true.
>>    false means that HBase tries to put Master + RegionServers in one
>> process.
>>    Pseudo-distributed = seperate processes/pids</description>
>>  </property>
>>
>>  <property>
>>    <name>hbase.regionserver.hlog.replication</name>
>>    <value>1</value>
>>    <description>For HBase to offer good data durability, we roll logs if
>>    filesystem replication falls below a certain amount.  In
>> psuedo-distributed
>>    mode, you normally only have the local filesystem or 1 HDFS DataNode, so
>> you
>>    don't want to roll logs constantly.</description>
>>  </property>
>>
>>  <property>
>>    <name>hbase.tmp.dir</name>
>>    <value>/tmp/hbase-testing</value>
>>    <description>Temporary directory on the local filesystem.</description>
>>  </property>
>>
>> I also hase Hadoop conf directory in HBASE_CLASSPATH (hbase-env.sh).
>>
>> I just tried etc/hosts with "127.0.0.1               localhost.localdomain
>> localhost" line. I got the same error I was getting before. I switched it
>> back to "127.0.0.1       localhost" and it worked. In between those
>> changes,
>> I stopped hbase, hadoop and killed still running region server. I hope that
>> helps.
>>
>> N.Gesli
>>
>>
>>
>>
>> On Thu, Sep 16, 2010 at 7:04 AM, Michael Scott <mj...@gmail.com>
>> wrote:
>>
>> > This sounds promising, I have one quick question about your steps:  where
>> > in
>> > the Hbase config *site*.xml did you make the change back to localhost?
>>  My
>> > hbase master is using the public IP address (97.86.88.18), and I don't
>> > think
>> > I've told it to.  I want to convince hbase to get rid of the line in the
>> > log
>> > file that says something like:
>> >
>> > 2010-09-16 09:59:21,727 INFO org.apache.hadoop.hbase.master.HMaster: My
>> > address is 97-86-88-18.static.aldl.mi.charter.com:60000
>> >
>> > (Note that my /etc/hosts has only the one line
>> > 127.0.0.1               localhost.localdomain localhost
>> > since I'm not running ipv6, but somehow hbase knows that the interface is
>> a
>> > comcast static address.  I can use /etc/hosts to change that to the
>> > registered domain name for 97-86-88-18, but this doesn't help.)
>> >
>> > To reply to Ryan's question, my ifconfig gives:
>> >
>> > eth0      Link encap:Ethernet  HWaddr 00:24:E8:01:DA:B8
>> >          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
>> >          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>> >          RX packets:319475 errors:0 dropped:0 overruns:0 frame:0
>> >          TX packets:290698 errors:0 dropped:0 overruns:0 carrier:0
>> >          collisions:0 txqueuelen:1000
>> >          RX bytes:108186958 (103.1 MiB)  TX bytes:187845633 (179.1 MiB)
>> >          Interrupt:28 Base address:0xa000
>> >
>> > lo        Link encap:Local Loopback
>> >          inet addr:127.0.0.1  Mask:255.0.0.0
>> >          UP LOOPBACK RUNNING  MTU:16436  Metric:1
>> >          RX packets:370795 errors:0 dropped:0 overruns:0 frame:0
>> >          TX packets:370795 errors:0 dropped:0 overruns:0 carrier:0
>> >          collisions:0 txqueuelen:0
>> >          RX bytes:108117402 (103.1 MiB)  TX bytes:108117402 (103.1 MiB)
>> >
>> > Thanks a bunch!
>> >
>> > Michael
>> >
>> > On Thu, Sep 16, 2010 at 1:12 AM, N.N. Gesli <nn...@gmail.com> wrote:
>> >
>> > > Hi Michael,
>> > >
>> > > I was having a similar problem and following this thread for any
>> > > suggestions. I tried everything suggested and more.
>> > >
>> > > I was trying to run Hadoop/Hbase pseudo distributed version on my Mac.
>> I
>> > > initially started with Hadoop 21.0 and Hbase 0.89 versions. I had
>> exactly
>> > > the same error that you were getting. Then switched to Hadoop 20.2 and
>> > > Hbase
>> > > 20.6 - still HMaster was not starting. Then finally it worked. Below
>> are
>> > my
>> > > steps to success :)
>> > >
>> > > * stopped hbase
>> > > * stopped hadoop
>> > > * run jps; RegionServer was still running; killed it manually
>> > > * in tmp directory (where hadoop namenode and *.pid files are stored) I
>> > > removed everything related to hadoop and hbase, including the
>> > directories.
>> > > (I had no data in Hadoop, so I could do this)
>> > > * changed the ports back to default 600**
>> > > * changed back Hadoop and Hbase configurations to "localhost" in
>> > *site*.xml
>> > > and regionservers. (Only I will be using this - no remote connection)
>> > > * changed back my /etc/hosts to the original version. It looks like
>> this:
>> > > 127.0.0.1    localhost
>> > > ::1             localhost
>> > > fe80::1%lo0    localhost
>> > > * reformatted the Hadoop namenode
>> > > * started Hadoop
>> > > * started HBase and it worked :)
>> > >
>> > > Let me know if you want to know any specific configuration.
>> > >
>> > > N.Gesli
>> > >
>> > > On Wed, Sep 15, 2010 at 10:41 PM, Ryan Rawson <ry...@gmail.com>
>> > wrote:
>> > >
>> > > > What is your ifconfig output looking like?
>> > > >
>> > > >
>> > > >
>> > > > On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <
>> mjscottuic@gmail.com>
>> > > > wrote:
>> > > > > Thanks for the continued advice.  I am still confused by the
>> > different
>> > > > > behaviors of hadoop and hbase. As I said before, I can't get hbase
>> to
>> > > > work
>> > > > > on any of the ports that hadoop works on, so I guess hadoop and
>> hbase
>> > > are
>> > > > > using different interfaces.  Why is this, and can't I ask hbase to
>> > use
>> > > > the
>> > > > > interface that hadoop uses?  What interfaces are hadoop and hbase
>> > > using?
>> > > > >
>> > > > > Also (and maybe this is the wrong forum for this question), how can
>> I
>> > > get
>> > > > my
>> > > > > OS to allow me to open 60000 using the IP address?  I have
>> > temporarily
>> > > > > disabled selinux and iptables, as I thought that this would simply
>> > > allow
>> > > > all
>> > > > > port connections. Still, this works just fine:
>> > > > > bash-4.0$ nc -l  60000 > /tmp/nc.out
>> > > > >
>> > > > > but this does not:
>> > > > > bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
>> > > > > (returns "nc: Cannot assign requested address"; I get the same
>> error
>> > > for
>> > > > the
>> > > > > hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0
>> is
>> > > > > allowed)
>> > > > >
>> > > > > I am trying to get hbase running for a socorro server, which will
>> > > running
>> > > > > locally.  I don't know if that matters.
>> > > > >
>> > > > > Thanks,
>> > > > > Michael
>> > > > >
>> > > > > On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com>
>> > > wrote:
>> > > > >
>> > > > >> Hey,
>> > > > >>
>> > > > >> If you bind to localhost you wont actually be reachable by anyone!
>> > > > >>
>> > > > >> The question is why is your OS disallowing binds to a specific
>> > > > >> interface/port combo?
>> > > > >>
>> > > > >> HBase does not really run in a blended/multihomed environment...
>> > > > >> meaning if you have multiple interfaces, you have to choose one
>> that
>> > > > >> we work over.  This is because we need to know a singular
>> canonical
>> > > > >> IP/name for any given server because we put that info up inside
>> > > > >> ZooKeeper and META tables.  So it's not just an artificial
>> > constraint,
>> > > > >> but exists for cluster management needs.
>> > > > >>
>> > > > >> Having said that, we do work on multihomed machines, eg: ec2, you
>> > > > >> might bind hbase to the internal interface taking advantage of the
>> > > > >> unmetered/faster network. Also better for security as well.
>> > > > >>
>> > > > >> Let us know if you need more background on how we use the network
>> > and
>> > > > why.
>> > > > >> -ryan
>> > > > >>
>> > > > >> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <
>> > mjscottuic@gmail.com
>> > > >
>> > > > >> wrote:
>> > > > >> > Hi again,
>> > > > >> >
>> > > > >> > I think the hbase server master is not starting because it is
>> > > > attempting
>> > > > >> to
>> > > > >> > open port 60000 on its public IP address, rather than using
>> > > localhost.
>> > > >  I
>> > > > >> > cannot seem to figure out how to force it (well, configure it)
>> to
>> > > > attempt
>> > > > >> to
>> > > > >> > bind to localhost:60000 instead.  As far as I can see,  this is
>> > set
>> > > in
>> > > > >> the
>> > > > >> > file:
>> > > > >> >
>> > > > >> > org/apache/hadoop/hbase/master/HMaster.java
>> > > > >> >
>> > > > >> > I don't know much about java, so I'd prefer not to edit the
>> source
>> > > if
>> > > > >> there
>> > > > >> > is an option, but I will if necessary.  Can someone please point
>> > me
>> > > to
>> > > > >> the
>> > > > >> > way to change this setting?  Any help would be greatly
>> > appreciated.
>> > > > >> >
>> > > > >> > Thanks,
>> > > > >> > Michael
>> > > > >> >
>> > > > >> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <
>> > > mjscottuic@gmail.com
>> > > > >> >wrote:
>> > > > >> >
>> > > > >> >> Hi again,
>> > > > >> >>
>> > > > >> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified
>> > it
>> > > > was
>> > > > >> >> still off, and encountered the same problem once again.
>> > > > >> >>
>> > > > >> >> I also tried to open port 60000 by hand with a small php file.
>>  I
>> > > can
>> > > > do
>> > > > >> >> this (as any user) for localhost.  I can NOT do this (not even
>> as
>> > > > root)
>> > > > >> for
>> > > > >> >> the IP address which matches the fully qualified domain name,
>> > which
>> > > > is
>> > > > >> what
>> > > > >> >> hbase is trying to use.  Is there some way for me to configure
>> > > hbase
>> > > > to
>> > > > >> use
>> > > > >> >> localhost instead of the fully qualified domain name for the
>> > > master?
>> > > >  I
>> > > > >> >> would have thought this was done by default, or that there
>> would
>> > be
>> > > > an
>> > > > >> >> obvious line in some conf file, but I can't find it.
>> > > > >> >>
>> > > > >> >> Thanks again,
>> > > > >> >>
>> > > > >> >> Michael
>> > > > >> >>
>> > > > >> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <
>> todd@cloudera.com
>> > >
>> > > > >> wrote:
>> > > > >> >>
>> > > > >> >>> Hi Michael,
>> > > > >> >>>
>> > > > >> >>> It might be related to IPV6. Do you have IPV6 enabled on this
>> > > > machine?
>> > > > >> >>>
>> > > > >> >>> Check out this hadoop JIRA that might be related for some
>> tips:
>> > > > >> >>> https://issues.apache.org/jira/browse/HADOOP-6056
>> > > > >> >>>
>> > > > >> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
>> > > > >> >>>
>> > > > >> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <
>> > > > mjscottuic@gmail.com
>> > > > >> >>> >wrote:
>> > > > >> >>>
>> > > > >> >>> > That's correct.  I tried a number of different ports to see
>> if
>> > > > there
>> > > > >> was
>> > > > >> >>> > something weird, and then I shut down the hadoop server and
>> > > tried
>> > > > to
>> > > > >> >>> > connect
>> > > > >> >>> > to 50010 (which of course should have been free at that
>> point)
>> > > but
>> > > > >> got
>> > > > >> >>> the
>> > > > >> >>> > same "cannot assign to requested address" error.  If I start
>> > > > hadoop,
>> > > > >> >>> > netstat
>> > > > >> >>> > shows a process listening on 50010.
>> > > > >> >>> >
>> > > > >> >>> > I am going to try this on a different OS, I am wondering if
>> > FC11
>> > > > is
>> > > > >> my
>> > > > >> >>> > problem.
>> > > > >> >>> >
>> > > > >> >>> > Michael
>> > > > >> >>> >
>> > > > >> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net>
>> > > wrote:
>> > > > >> >>> >
>> > > > >> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
>> > > > >> mjscottuic@gmail.com>
>> > > > >> >>> > > wrote:
>> > > > >> >>> > > > I don't see why hadoop binds
>> > > > >> >>> > > > to a port but hbase does not (I even tried starting
>> hbase
>> > > with
>> > > > >> >>> hadoop
>> > > > >> >>> > off
>> > > > >> >>> > > > and binding to 50010, which hadoop uses).
>> > > > >> >>> > > >
>> > > > >> >>> > >
>> > > > >> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.
>>  We
>> > > > hadoop
>> > > > >> >>> > > their mechanism essentially).
>> > > > >> >>> > >
>> > > > >> >>> > > St.Ack
>> > > > >> >>> > >
>> > > > >> >>> >
>> > > > >> >>>
>> > > > >> >>>
>> > > > >> >>>
>> > > > >> >>> --
>> > > > >> >>> Todd Lipcon
>> > > > >> >>> Software Engineer, Cloudera
>> > > > >> >>>
>> > > > >> >>
>> > > > >> >>
>> > > > >> >
>> > > > >>
>> > > > >
>> > > >
>> > >
>> >
>>
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
Thanks again.  This changes the behavior, but it does not yet fix my
problem.  The hbase.rootdir property forces the hbase master to stay alive
for a little while, so I had a moment of short-lived euphoria when Hmaster
appeared in the jps list, but this only lasts while it tries to connect to
localhost:9000 (which is not open), and it still doesn't open port 60000 and
it still thinks it is named my-static-ip.com (i.e., same error message as
before).  The removal of localhost.localdomain from /etc/hosts made no
difference one way or the other.  I still am looking for a way to try to
have hbase bind to localhost:6000 instead of my-static-ip.com:6000.  I will
also try to see why localhost:9000 is not open (though that appears later in
the log file, so I don't think it is causing the failure to open 60000).
Thanks for the help so far, I will post again with further info.

Michael

On Thu, Sep 16, 2010 at 12:53 PM, N.N. Gesli <nn...@gmail.com> wrote:

> I have this in hbase-site.xml:
>
>  <property>
>    <name>hbase.rootdir</name>
>    <value>hdfs://localhost:9000/hbase</value>
>    <description>The directory shared by region servers.
>    Should be fully-qualified to include the filesystem to use.
>    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
>    </description>
>  </property>
>
>  <property>
>    <name>hbase.cluster.distributed</name>
>    <value>true</value>
>    <description>For psuedo-distributed, you want to set this to true.
>    false means that HBase tries to put Master + RegionServers in one
> process.
>    Pseudo-distributed = seperate processes/pids</description>
>  </property>
>
>  <property>
>    <name>hbase.regionserver.hlog.replication</name>
>    <value>1</value>
>    <description>For HBase to offer good data durability, we roll logs if
>    filesystem replication falls below a certain amount.  In
> psuedo-distributed
>    mode, you normally only have the local filesystem or 1 HDFS DataNode, so
> you
>    don't want to roll logs constantly.</description>
>  </property>
>
>  <property>
>    <name>hbase.tmp.dir</name>
>    <value>/tmp/hbase-testing</value>
>    <description>Temporary directory on the local filesystem.</description>
>  </property>
>
> I also hase Hadoop conf directory in HBASE_CLASSPATH (hbase-env.sh).
>
> I just tried etc/hosts with "127.0.0.1               localhost.localdomain
> localhost" line. I got the same error I was getting before. I switched it
> back to "127.0.0.1       localhost" and it worked. In between those
> changes,
> I stopped hbase, hadoop and killed still running region server. I hope that
> helps.
>
> N.Gesli
>
>
>
>
> On Thu, Sep 16, 2010 at 7:04 AM, Michael Scott <mj...@gmail.com>
> wrote:
>
> > This sounds promising, I have one quick question about your steps:  where
> > in
> > the Hbase config *site*.xml did you make the change back to localhost?
>  My
> > hbase master is using the public IP address (97.86.88.18), and I don't
> > think
> > I've told it to.  I want to convince hbase to get rid of the line in the
> > log
> > file that says something like:
> >
> > 2010-09-16 09:59:21,727 INFO org.apache.hadoop.hbase.master.HMaster: My
> > address is 97-86-88-18.static.aldl.mi.charter.com:60000
> >
> > (Note that my /etc/hosts has only the one line
> > 127.0.0.1               localhost.localdomain localhost
> > since I'm not running ipv6, but somehow hbase knows that the interface is
> a
> > comcast static address.  I can use /etc/hosts to change that to the
> > registered domain name for 97-86-88-18, but this doesn't help.)
> >
> > To reply to Ryan's question, my ifconfig gives:
> >
> > eth0      Link encap:Ethernet  HWaddr 00:24:E8:01:DA:B8
> >          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
> >          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >          RX packets:319475 errors:0 dropped:0 overruns:0 frame:0
> >          TX packets:290698 errors:0 dropped:0 overruns:0 carrier:0
> >          collisions:0 txqueuelen:1000
> >          RX bytes:108186958 (103.1 MiB)  TX bytes:187845633 (179.1 MiB)
> >          Interrupt:28 Base address:0xa000
> >
> > lo        Link encap:Local Loopback
> >          inet addr:127.0.0.1  Mask:255.0.0.0
> >          UP LOOPBACK RUNNING  MTU:16436  Metric:1
> >          RX packets:370795 errors:0 dropped:0 overruns:0 frame:0
> >          TX packets:370795 errors:0 dropped:0 overruns:0 carrier:0
> >          collisions:0 txqueuelen:0
> >          RX bytes:108117402 (103.1 MiB)  TX bytes:108117402 (103.1 MiB)
> >
> > Thanks a bunch!
> >
> > Michael
> >
> > On Thu, Sep 16, 2010 at 1:12 AM, N.N. Gesli <nn...@gmail.com> wrote:
> >
> > > Hi Michael,
> > >
> > > I was having a similar problem and following this thread for any
> > > suggestions. I tried everything suggested and more.
> > >
> > > I was trying to run Hadoop/Hbase pseudo distributed version on my Mac.
> I
> > > initially started with Hadoop 21.0 and Hbase 0.89 versions. I had
> exactly
> > > the same error that you were getting. Then switched to Hadoop 20.2 and
> > > Hbase
> > > 20.6 - still HMaster was not starting. Then finally it worked. Below
> are
> > my
> > > steps to success :)
> > >
> > > * stopped hbase
> > > * stopped hadoop
> > > * run jps; RegionServer was still running; killed it manually
> > > * in tmp directory (where hadoop namenode and *.pid files are stored) I
> > > removed everything related to hadoop and hbase, including the
> > directories.
> > > (I had no data in Hadoop, so I could do this)
> > > * changed the ports back to default 600**
> > > * changed back Hadoop and Hbase configurations to "localhost" in
> > *site*.xml
> > > and regionservers. (Only I will be using this - no remote connection)
> > > * changed back my /etc/hosts to the original version. It looks like
> this:
> > > 127.0.0.1    localhost
> > > ::1             localhost
> > > fe80::1%lo0    localhost
> > > * reformatted the Hadoop namenode
> > > * started Hadoop
> > > * started HBase and it worked :)
> > >
> > > Let me know if you want to know any specific configuration.
> > >
> > > N.Gesli
> > >
> > > On Wed, Sep 15, 2010 at 10:41 PM, Ryan Rawson <ry...@gmail.com>
> > wrote:
> > >
> > > > What is your ifconfig output looking like?
> > > >
> > > >
> > > >
> > > > On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <
> mjscottuic@gmail.com>
> > > > wrote:
> > > > > Thanks for the continued advice.  I am still confused by the
> > different
> > > > > behaviors of hadoop and hbase. As I said before, I can't get hbase
> to
> > > > work
> > > > > on any of the ports that hadoop works on, so I guess hadoop and
> hbase
> > > are
> > > > > using different interfaces.  Why is this, and can't I ask hbase to
> > use
> > > > the
> > > > > interface that hadoop uses?  What interfaces are hadoop and hbase
> > > using?
> > > > >
> > > > > Also (and maybe this is the wrong forum for this question), how can
> I
> > > get
> > > > my
> > > > > OS to allow me to open 60000 using the IP address?  I have
> > temporarily
> > > > > disabled selinux and iptables, as I thought that this would simply
> > > allow
> > > > all
> > > > > port connections. Still, this works just fine:
> > > > > bash-4.0$ nc -l  60000 > /tmp/nc.out
> > > > >
> > > > > but this does not:
> > > > > bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
> > > > > (returns "nc: Cannot assign requested address"; I get the same
> error
> > > for
> > > > the
> > > > > hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0
> is
> > > > > allowed)
> > > > >
> > > > > I am trying to get hbase running for a socorro server, which will
> > > running
> > > > > locally.  I don't know if that matters.
> > > > >
> > > > > Thanks,
> > > > > Michael
> > > > >
> > > > > On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com>
> > > wrote:
> > > > >
> > > > >> Hey,
> > > > >>
> > > > >> If you bind to localhost you wont actually be reachable by anyone!
> > > > >>
> > > > >> The question is why is your OS disallowing binds to a specific
> > > > >> interface/port combo?
> > > > >>
> > > > >> HBase does not really run in a blended/multihomed environment...
> > > > >> meaning if you have multiple interfaces, you have to choose one
> that
> > > > >> we work over.  This is because we need to know a singular
> canonical
> > > > >> IP/name for any given server because we put that info up inside
> > > > >> ZooKeeper and META tables.  So it's not just an artificial
> > constraint,
> > > > >> but exists for cluster management needs.
> > > > >>
> > > > >> Having said that, we do work on multihomed machines, eg: ec2, you
> > > > >> might bind hbase to the internal interface taking advantage of the
> > > > >> unmetered/faster network. Also better for security as well.
> > > > >>
> > > > >> Let us know if you need more background on how we use the network
> > and
> > > > why.
> > > > >> -ryan
> > > > >>
> > > > >> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <
> > mjscottuic@gmail.com
> > > >
> > > > >> wrote:
> > > > >> > Hi again,
> > > > >> >
> > > > >> > I think the hbase server master is not starting because it is
> > > > attempting
> > > > >> to
> > > > >> > open port 60000 on its public IP address, rather than using
> > > localhost.
> > > >  I
> > > > >> > cannot seem to figure out how to force it (well, configure it)
> to
> > > > attempt
> > > > >> to
> > > > >> > bind to localhost:60000 instead.  As far as I can see,  this is
> > set
> > > in
> > > > >> the
> > > > >> > file:
> > > > >> >
> > > > >> > org/apache/hadoop/hbase/master/HMaster.java
> > > > >> >
> > > > >> > I don't know much about java, so I'd prefer not to edit the
> source
> > > if
> > > > >> there
> > > > >> > is an option, but I will if necessary.  Can someone please point
> > me
> > > to
> > > > >> the
> > > > >> > way to change this setting?  Any help would be greatly
> > appreciated.
> > > > >> >
> > > > >> > Thanks,
> > > > >> > Michael
> > > > >> >
> > > > >> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <
> > > mjscottuic@gmail.com
> > > > >> >wrote:
> > > > >> >
> > > > >> >> Hi again,
> > > > >> >>
> > > > >> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified
> > it
> > > > was
> > > > >> >> still off, and encountered the same problem once again.
> > > > >> >>
> > > > >> >> I also tried to open port 60000 by hand with a small php file.
>  I
> > > can
> > > > do
> > > > >> >> this (as any user) for localhost.  I can NOT do this (not even
> as
> > > > root)
> > > > >> for
> > > > >> >> the IP address which matches the fully qualified domain name,
> > which
> > > > is
> > > > >> what
> > > > >> >> hbase is trying to use.  Is there some way for me to configure
> > > hbase
> > > > to
> > > > >> use
> > > > >> >> localhost instead of the fully qualified domain name for the
> > > master?
> > > >  I
> > > > >> >> would have thought this was done by default, or that there
> would
> > be
> > > > an
> > > > >> >> obvious line in some conf file, but I can't find it.
> > > > >> >>
> > > > >> >> Thanks again,
> > > > >> >>
> > > > >> >> Michael
> > > > >> >>
> > > > >> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <
> todd@cloudera.com
> > >
> > > > >> wrote:
> > > > >> >>
> > > > >> >>> Hi Michael,
> > > > >> >>>
> > > > >> >>> It might be related to IPV6. Do you have IPV6 enabled on this
> > > > machine?
> > > > >> >>>
> > > > >> >>> Check out this hadoop JIRA that might be related for some
> tips:
> > > > >> >>> https://issues.apache.org/jira/browse/HADOOP-6056
> > > > >> >>>
> > > > >> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
> > > > >> >>>
> > > > >> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <
> > > > mjscottuic@gmail.com
> > > > >> >>> >wrote:
> > > > >> >>>
> > > > >> >>> > That's correct.  I tried a number of different ports to see
> if
> > > > there
> > > > >> was
> > > > >> >>> > something weird, and then I shut down the hadoop server and
> > > tried
> > > > to
> > > > >> >>> > connect
> > > > >> >>> > to 50010 (which of course should have been free at that
> point)
> > > but
> > > > >> got
> > > > >> >>> the
> > > > >> >>> > same "cannot assign to requested address" error.  If I start
> > > > hadoop,
> > > > >> >>> > netstat
> > > > >> >>> > shows a process listening on 50010.
> > > > >> >>> >
> > > > >> >>> > I am going to try this on a different OS, I am wondering if
> > FC11
> > > > is
> > > > >> my
> > > > >> >>> > problem.
> > > > >> >>> >
> > > > >> >>> > Michael
> > > > >> >>> >
> > > > >> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net>
> > > wrote:
> > > > >> >>> >
> > > > >> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
> > > > >> mjscottuic@gmail.com>
> > > > >> >>> > > wrote:
> > > > >> >>> > > > I don't see why hadoop binds
> > > > >> >>> > > > to a port but hbase does not (I even tried starting
> hbase
> > > with
> > > > >> >>> hadoop
> > > > >> >>> > off
> > > > >> >>> > > > and binding to 50010, which hadoop uses).
> > > > >> >>> > > >
> > > > >> >>> > >
> > > > >> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.
>  We
> > > > hadoop
> > > > >> >>> > > their mechanism essentially).
> > > > >> >>> > >
> > > > >> >>> > > St.Ack
> > > > >> >>> > >
> > > > >> >>> >
> > > > >> >>>
> > > > >> >>>
> > > > >> >>>
> > > > >> >>> --
> > > > >> >>> Todd Lipcon
> > > > >> >>> Software Engineer, Cloudera
> > > > >> >>>
> > > > >> >>
> > > > >> >>
> > > > >> >
> > > > >>
> > > > >
> > > >
> > >
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by "N.N. Gesli" <nn...@gmail.com>.
I have this in hbase-site.xml:

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>For psuedo-distributed, you want to set this to true.
    false means that HBase tries to put Master + RegionServers in one
process.
    Pseudo-distributed = seperate processes/pids</description>
  </property>

  <property>
    <name>hbase.regionserver.hlog.replication</name>
    <value>1</value>
    <description>For HBase to offer good data durability, we roll logs if
    filesystem replication falls below a certain amount.  In
psuedo-distributed
    mode, you normally only have the local filesystem or 1 HDFS DataNode, so
you
    don't want to roll logs constantly.</description>
  </property>

  <property>
    <name>hbase.tmp.dir</name>
    <value>/tmp/hbase-testing</value>
    <description>Temporary directory on the local filesystem.</description>
  </property>

I also hase Hadoop conf directory in HBASE_CLASSPATH (hbase-env.sh).

I just tried etc/hosts with "127.0.0.1               localhost.localdomain
localhost" line. I got the same error I was getting before. I switched it
back to "127.0.0.1       localhost" and it worked. In between those changes,
I stopped hbase, hadoop and killed still running region server. I hope that
helps.

N.Gesli




On Thu, Sep 16, 2010 at 7:04 AM, Michael Scott <mj...@gmail.com> wrote:

> This sounds promising, I have one quick question about your steps:  where
> in
> the Hbase config *site*.xml did you make the change back to localhost?  My
> hbase master is using the public IP address (97.86.88.18), and I don't
> think
> I've told it to.  I want to convince hbase to get rid of the line in the
> log
> file that says something like:
>
> 2010-09-16 09:59:21,727 INFO org.apache.hadoop.hbase.master.HMaster: My
> address is 97-86-88-18.static.aldl.mi.charter.com:60000
>
> (Note that my /etc/hosts has only the one line
> 127.0.0.1               localhost.localdomain localhost
> since I'm not running ipv6, but somehow hbase knows that the interface is a
> comcast static address.  I can use /etc/hosts to change that to the
> registered domain name for 97-86-88-18, but this doesn't help.)
>
> To reply to Ryan's question, my ifconfig gives:
>
> eth0      Link encap:Ethernet  HWaddr 00:24:E8:01:DA:B8
>          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:319475 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:290698 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000
>          RX bytes:108186958 (103.1 MiB)  TX bytes:187845633 (179.1 MiB)
>          Interrupt:28 Base address:0xa000
>
> lo        Link encap:Local Loopback
>          inet addr:127.0.0.1  Mask:255.0.0.0
>          UP LOOPBACK RUNNING  MTU:16436  Metric:1
>          RX packets:370795 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:370795 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0
>          RX bytes:108117402 (103.1 MiB)  TX bytes:108117402 (103.1 MiB)
>
> Thanks a bunch!
>
> Michael
>
> On Thu, Sep 16, 2010 at 1:12 AM, N.N. Gesli <nn...@gmail.com> wrote:
>
> > Hi Michael,
> >
> > I was having a similar problem and following this thread for any
> > suggestions. I tried everything suggested and more.
> >
> > I was trying to run Hadoop/Hbase pseudo distributed version on my Mac. I
> > initially started with Hadoop 21.0 and Hbase 0.89 versions. I had exactly
> > the same error that you were getting. Then switched to Hadoop 20.2 and
> > Hbase
> > 20.6 - still HMaster was not starting. Then finally it worked. Below are
> my
> > steps to success :)
> >
> > * stopped hbase
> > * stopped hadoop
> > * run jps; RegionServer was still running; killed it manually
> > * in tmp directory (where hadoop namenode and *.pid files are stored) I
> > removed everything related to hadoop and hbase, including the
> directories.
> > (I had no data in Hadoop, so I could do this)
> > * changed the ports back to default 600**
> > * changed back Hadoop and Hbase configurations to "localhost" in
> *site*.xml
> > and regionservers. (Only I will be using this - no remote connection)
> > * changed back my /etc/hosts to the original version. It looks like this:
> > 127.0.0.1    localhost
> > ::1             localhost
> > fe80::1%lo0    localhost
> > * reformatted the Hadoop namenode
> > * started Hadoop
> > * started HBase and it worked :)
> >
> > Let me know if you want to know any specific configuration.
> >
> > N.Gesli
> >
> > On Wed, Sep 15, 2010 at 10:41 PM, Ryan Rawson <ry...@gmail.com>
> wrote:
> >
> > > What is your ifconfig output looking like?
> > >
> > >
> > >
> > > On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <mj...@gmail.com>
> > > wrote:
> > > > Thanks for the continued advice.  I am still confused by the
> different
> > > > behaviors of hadoop and hbase. As I said before, I can't get hbase to
> > > work
> > > > on any of the ports that hadoop works on, so I guess hadoop and hbase
> > are
> > > > using different interfaces.  Why is this, and can't I ask hbase to
> use
> > > the
> > > > interface that hadoop uses?  What interfaces are hadoop and hbase
> > using?
> > > >
> > > > Also (and maybe this is the wrong forum for this question), how can I
> > get
> > > my
> > > > OS to allow me to open 60000 using the IP address?  I have
> temporarily
> > > > disabled selinux and iptables, as I thought that this would simply
> > allow
> > > all
> > > > port connections. Still, this works just fine:
> > > > bash-4.0$ nc -l  60000 > /tmp/nc.out
> > > >
> > > > but this does not:
> > > > bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
> > > > (returns "nc: Cannot assign requested address"; I get the same error
> > for
> > > the
> > > > hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0 is
> > > > allowed)
> > > >
> > > > I am trying to get hbase running for a socorro server, which will
> > running
> > > > locally.  I don't know if that matters.
> > > >
> > > > Thanks,
> > > > Michael
> > > >
> > > > On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com>
> > wrote:
> > > >
> > > >> Hey,
> > > >>
> > > >> If you bind to localhost you wont actually be reachable by anyone!
> > > >>
> > > >> The question is why is your OS disallowing binds to a specific
> > > >> interface/port combo?
> > > >>
> > > >> HBase does not really run in a blended/multihomed environment...
> > > >> meaning if you have multiple interfaces, you have to choose one that
> > > >> we work over.  This is because we need to know a singular canonical
> > > >> IP/name for any given server because we put that info up inside
> > > >> ZooKeeper and META tables.  So it's not just an artificial
> constraint,
> > > >> but exists for cluster management needs.
> > > >>
> > > >> Having said that, we do work on multihomed machines, eg: ec2, you
> > > >> might bind hbase to the internal interface taking advantage of the
> > > >> unmetered/faster network. Also better for security as well.
> > > >>
> > > >> Let us know if you need more background on how we use the network
> and
> > > why.
> > > >> -ryan
> > > >>
> > > >> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <
> mjscottuic@gmail.com
> > >
> > > >> wrote:
> > > >> > Hi again,
> > > >> >
> > > >> > I think the hbase server master is not starting because it is
> > > attempting
> > > >> to
> > > >> > open port 60000 on its public IP address, rather than using
> > localhost.
> > >  I
> > > >> > cannot seem to figure out how to force it (well, configure it) to
> > > attempt
> > > >> to
> > > >> > bind to localhost:60000 instead.  As far as I can see,  this is
> set
> > in
> > > >> the
> > > >> > file:
> > > >> >
> > > >> > org/apache/hadoop/hbase/master/HMaster.java
> > > >> >
> > > >> > I don't know much about java, so I'd prefer not to edit the source
> > if
> > > >> there
> > > >> > is an option, but I will if necessary.  Can someone please point
> me
> > to
> > > >> the
> > > >> > way to change this setting?  Any help would be greatly
> appreciated.
> > > >> >
> > > >> > Thanks,
> > > >> > Michael
> > > >> >
> > > >> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <
> > mjscottuic@gmail.com
> > > >> >wrote:
> > > >> >
> > > >> >> Hi again,
> > > >> >>
> > > >> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified
> it
> > > was
> > > >> >> still off, and encountered the same problem once again.
> > > >> >>
> > > >> >> I also tried to open port 60000 by hand with a small php file.  I
> > can
> > > do
> > > >> >> this (as any user) for localhost.  I can NOT do this (not even as
> > > root)
> > > >> for
> > > >> >> the IP address which matches the fully qualified domain name,
> which
> > > is
> > > >> what
> > > >> >> hbase is trying to use.  Is there some way for me to configure
> > hbase
> > > to
> > > >> use
> > > >> >> localhost instead of the fully qualified domain name for the
> > master?
> > >  I
> > > >> >> would have thought this was done by default, or that there would
> be
> > > an
> > > >> >> obvious line in some conf file, but I can't find it.
> > > >> >>
> > > >> >> Thanks again,
> > > >> >>
> > > >> >> Michael
> > > >> >>
> > > >> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <todd@cloudera.com
> >
> > > >> wrote:
> > > >> >>
> > > >> >>> Hi Michael,
> > > >> >>>
> > > >> >>> It might be related to IPV6. Do you have IPV6 enabled on this
> > > machine?
> > > >> >>>
> > > >> >>> Check out this hadoop JIRA that might be related for some tips:
> > > >> >>> https://issues.apache.org/jira/browse/HADOOP-6056
> > > >> >>>
> > > >> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
> > > >> >>>
> > > >> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <
> > > mjscottuic@gmail.com
> > > >> >>> >wrote:
> > > >> >>>
> > > >> >>> > That's correct.  I tried a number of different ports to see if
> > > there
> > > >> was
> > > >> >>> > something weird, and then I shut down the hadoop server and
> > tried
> > > to
> > > >> >>> > connect
> > > >> >>> > to 50010 (which of course should have been free at that point)
> > but
> > > >> got
> > > >> >>> the
> > > >> >>> > same "cannot assign to requested address" error.  If I start
> > > hadoop,
> > > >> >>> > netstat
> > > >> >>> > shows a process listening on 50010.
> > > >> >>> >
> > > >> >>> > I am going to try this on a different OS, I am wondering if
> FC11
> > > is
> > > >> my
> > > >> >>> > problem.
> > > >> >>> >
> > > >> >>> > Michael
> > > >> >>> >
> > > >> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net>
> > wrote:
> > > >> >>> >
> > > >> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
> > > >> mjscottuic@gmail.com>
> > > >> >>> > > wrote:
> > > >> >>> > > > I don't see why hadoop binds
> > > >> >>> > > > to a port but hbase does not (I even tried starting hbase
> > with
> > > >> >>> hadoop
> > > >> >>> > off
> > > >> >>> > > > and binding to 50010, which hadoop uses).
> > > >> >>> > > >
> > > >> >>> > >
> > > >> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We
> > > hadoop
> > > >> >>> > > their mechanism essentially).
> > > >> >>> > >
> > > >> >>> > > St.Ack
> > > >> >>> > >
> > > >> >>> >
> > > >> >>>
> > > >> >>>
> > > >> >>>
> > > >> >>> --
> > > >> >>> Todd Lipcon
> > > >> >>> Software Engineer, Cloudera
> > > >> >>>
> > > >> >>
> > > >> >>
> > > >> >
> > > >>
> > > >
> > >
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
This sounds promising, I have one quick question about your steps:  where in
the Hbase config *site*.xml did you make the change back to localhost?  My
hbase master is using the public IP address (97.86.88.18), and I don't think
I've told it to.  I want to convince hbase to get rid of the line in the log
file that says something like:

2010-09-16 09:59:21,727 INFO org.apache.hadoop.hbase.master.HMaster: My
address is 97-86-88-18.static.aldl.mi.charter.com:60000

(Note that my /etc/hosts has only the one line
127.0.0.1               localhost.localdomain localhost
since I'm not running ipv6, but somehow hbase knows that the interface is a
comcast static address.  I can use /etc/hosts to change that to the
registered domain name for 97-86-88-18, but this doesn't help.)

To reply to Ryan's question, my ifconfig gives:

eth0      Link encap:Ethernet  HWaddr 00:24:E8:01:DA:B8
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:319475 errors:0 dropped:0 overruns:0 frame:0
          TX packets:290698 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:108186958 (103.1 MiB)  TX bytes:187845633 (179.1 MiB)
          Interrupt:28 Base address:0xa000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:370795 errors:0 dropped:0 overruns:0 frame:0
          TX packets:370795 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:108117402 (103.1 MiB)  TX bytes:108117402 (103.1 MiB)

Thanks a bunch!

Michael

On Thu, Sep 16, 2010 at 1:12 AM, N.N. Gesli <nn...@gmail.com> wrote:

> Hi Michael,
>
> I was having a similar problem and following this thread for any
> suggestions. I tried everything suggested and more.
>
> I was trying to run Hadoop/Hbase pseudo distributed version on my Mac. I
> initially started with Hadoop 21.0 and Hbase 0.89 versions. I had exactly
> the same error that you were getting. Then switched to Hadoop 20.2 and
> Hbase
> 20.6 - still HMaster was not starting. Then finally it worked. Below are my
> steps to success :)
>
> * stopped hbase
> * stopped hadoop
> * run jps; RegionServer was still running; killed it manually
> * in tmp directory (where hadoop namenode and *.pid files are stored) I
> removed everything related to hadoop and hbase, including the directories.
> (I had no data in Hadoop, so I could do this)
> * changed the ports back to default 600**
> * changed back Hadoop and Hbase configurations to "localhost" in *site*.xml
> and regionservers. (Only I will be using this - no remote connection)
> * changed back my /etc/hosts to the original version. It looks like this:
> 127.0.0.1    localhost
> ::1             localhost
> fe80::1%lo0    localhost
> * reformatted the Hadoop namenode
> * started Hadoop
> * started HBase and it worked :)
>
> Let me know if you want to know any specific configuration.
>
> N.Gesli
>
> On Wed, Sep 15, 2010 at 10:41 PM, Ryan Rawson <ry...@gmail.com> wrote:
>
> > What is your ifconfig output looking like?
> >
> >
> >
> > On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <mj...@gmail.com>
> > wrote:
> > > Thanks for the continued advice.  I am still confused by the different
> > > behaviors of hadoop and hbase. As I said before, I can't get hbase to
> > work
> > > on any of the ports that hadoop works on, so I guess hadoop and hbase
> are
> > > using different interfaces.  Why is this, and can't I ask hbase to use
> > the
> > > interface that hadoop uses?  What interfaces are hadoop and hbase
> using?
> > >
> > > Also (and maybe this is the wrong forum for this question), how can I
> get
> > my
> > > OS to allow me to open 60000 using the IP address?  I have temporarily
> > > disabled selinux and iptables, as I thought that this would simply
> allow
> > all
> > > port connections. Still, this works just fine:
> > > bash-4.0$ nc -l  60000 > /tmp/nc.out
> > >
> > > but this does not:
> > > bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
> > > (returns "nc: Cannot assign requested address"; I get the same error
> for
> > the
> > > hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0 is
> > > allowed)
> > >
> > > I am trying to get hbase running for a socorro server, which will
> running
> > > locally.  I don't know if that matters.
> > >
> > > Thanks,
> > > Michael
> > >
> > > On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com>
> wrote:
> > >
> > >> Hey,
> > >>
> > >> If you bind to localhost you wont actually be reachable by anyone!
> > >>
> > >> The question is why is your OS disallowing binds to a specific
> > >> interface/port combo?
> > >>
> > >> HBase does not really run in a blended/multihomed environment...
> > >> meaning if you have multiple interfaces, you have to choose one that
> > >> we work over.  This is because we need to know a singular canonical
> > >> IP/name for any given server because we put that info up inside
> > >> ZooKeeper and META tables.  So it's not just an artificial constraint,
> > >> but exists for cluster management needs.
> > >>
> > >> Having said that, we do work on multihomed machines, eg: ec2, you
> > >> might bind hbase to the internal interface taking advantage of the
> > >> unmetered/faster network. Also better for security as well.
> > >>
> > >> Let us know if you need more background on how we use the network and
> > why.
> > >> -ryan
> > >>
> > >> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <mjscottuic@gmail.com
> >
> > >> wrote:
> > >> > Hi again,
> > >> >
> > >> > I think the hbase server master is not starting because it is
> > attempting
> > >> to
> > >> > open port 60000 on its public IP address, rather than using
> localhost.
> >  I
> > >> > cannot seem to figure out how to force it (well, configure it) to
> > attempt
> > >> to
> > >> > bind to localhost:60000 instead.  As far as I can see,  this is set
> in
> > >> the
> > >> > file:
> > >> >
> > >> > org/apache/hadoop/hbase/master/HMaster.java
> > >> >
> > >> > I don't know much about java, so I'd prefer not to edit the source
> if
> > >> there
> > >> > is an option, but I will if necessary.  Can someone please point me
> to
> > >> the
> > >> > way to change this setting?  Any help would be greatly appreciated.
> > >> >
> > >> > Thanks,
> > >> > Michael
> > >> >
> > >> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <
> mjscottuic@gmail.com
> > >> >wrote:
> > >> >
> > >> >> Hi again,
> > >> >>
> > >> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified it
> > was
> > >> >> still off, and encountered the same problem once again.
> > >> >>
> > >> >> I also tried to open port 60000 by hand with a small php file.  I
> can
> > do
> > >> >> this (as any user) for localhost.  I can NOT do this (not even as
> > root)
> > >> for
> > >> >> the IP address which matches the fully qualified domain name, which
> > is
> > >> what
> > >> >> hbase is trying to use.  Is there some way for me to configure
> hbase
> > to
> > >> use
> > >> >> localhost instead of the fully qualified domain name for the
> master?
> >  I
> > >> >> would have thought this was done by default, or that there would be
> > an
> > >> >> obvious line in some conf file, but I can't find it.
> > >> >>
> > >> >> Thanks again,
> > >> >>
> > >> >> Michael
> > >> >>
> > >> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com>
> > >> wrote:
> > >> >>
> > >> >>> Hi Michael,
> > >> >>>
> > >> >>> It might be related to IPV6. Do you have IPV6 enabled on this
> > machine?
> > >> >>>
> > >> >>> Check out this hadoop JIRA that might be related for some tips:
> > >> >>> https://issues.apache.org/jira/browse/HADOOP-6056
> > >> >>>
> > >> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
> > >> >>>
> > >> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <
> > mjscottuic@gmail.com
> > >> >>> >wrote:
> > >> >>>
> > >> >>> > That's correct.  I tried a number of different ports to see if
> > there
> > >> was
> > >> >>> > something weird, and then I shut down the hadoop server and
> tried
> > to
> > >> >>> > connect
> > >> >>> > to 50010 (which of course should have been free at that point)
> but
> > >> got
> > >> >>> the
> > >> >>> > same "cannot assign to requested address" error.  If I start
> > hadoop,
> > >> >>> > netstat
> > >> >>> > shows a process listening on 50010.
> > >> >>> >
> > >> >>> > I am going to try this on a different OS, I am wondering if FC11
> > is
> > >> my
> > >> >>> > problem.
> > >> >>> >
> > >> >>> > Michael
> > >> >>> >
> > >> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net>
> wrote:
> > >> >>> >
> > >> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
> > >> mjscottuic@gmail.com>
> > >> >>> > > wrote:
> > >> >>> > > > I don't see why hadoop binds
> > >> >>> > > > to a port but hbase does not (I even tried starting hbase
> with
> > >> >>> hadoop
> > >> >>> > off
> > >> >>> > > > and binding to 50010, which hadoop uses).
> > >> >>> > > >
> > >> >>> > >
> > >> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We
> > hadoop
> > >> >>> > > their mechanism essentially).
> > >> >>> > >
> > >> >>> > > St.Ack
> > >> >>> > >
> > >> >>> >
> > >> >>>
> > >> >>>
> > >> >>>
> > >> >>> --
> > >> >>> Todd Lipcon
> > >> >>> Software Engineer, Cloudera
> > >> >>>
> > >> >>
> > >> >>
> > >> >
> > >>
> > >
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by "N.N. Gesli" <nn...@gmail.com>.
Hi Michael,

I was having a similar problem and following this thread for any
suggestions. I tried everything suggested and more.

I was trying to run Hadoop/Hbase pseudo distributed version on my Mac. I
initially started with Hadoop 21.0 and Hbase 0.89 versions. I had exactly
the same error that you were getting. Then switched to Hadoop 20.2 and Hbase
20.6 - still HMaster was not starting. Then finally it worked. Below are my
steps to success :)

* stopped hbase
* stopped hadoop
* run jps; RegionServer was still running; killed it manually
* in tmp directory (where hadoop namenode and *.pid files are stored) I
removed everything related to hadoop and hbase, including the directories.
(I had no data in Hadoop, so I could do this)
* changed the ports back to default 600**
* changed back Hadoop and Hbase configurations to "localhost" in *site*.xml
and regionservers. (Only I will be using this - no remote connection)
* changed back my /etc/hosts to the original version. It looks like this:
127.0.0.1    localhost
::1             localhost
fe80::1%lo0    localhost
* reformatted the Hadoop namenode
* started Hadoop
* started HBase and it worked :)

Let me know if you want to know any specific configuration.

N.Gesli

On Wed, Sep 15, 2010 at 10:41 PM, Ryan Rawson <ry...@gmail.com> wrote:

> What is your ifconfig output looking like?
>
>
>
> On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <mj...@gmail.com>
> wrote:
> > Thanks for the continued advice.  I am still confused by the different
> > behaviors of hadoop and hbase. As I said before, I can't get hbase to
> work
> > on any of the ports that hadoop works on, so I guess hadoop and hbase are
> > using different interfaces.  Why is this, and can't I ask hbase to use
> the
> > interface that hadoop uses?  What interfaces are hadoop and hbase using?
> >
> > Also (and maybe this is the wrong forum for this question), how can I get
> my
> > OS to allow me to open 60000 using the IP address?  I have temporarily
> > disabled selinux and iptables, as I thought that this would simply allow
> all
> > port connections. Still, this works just fine:
> > bash-4.0$ nc -l  60000 > /tmp/nc.out
> >
> > but this does not:
> > bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
> > (returns "nc: Cannot assign requested address"; I get the same error for
> the
> > hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0 is
> > allowed)
> >
> > I am trying to get hbase running for a socorro server, which will running
> > locally.  I don't know if that matters.
> >
> > Thanks,
> > Michael
> >
> > On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com> wrote:
> >
> >> Hey,
> >>
> >> If you bind to localhost you wont actually be reachable by anyone!
> >>
> >> The question is why is your OS disallowing binds to a specific
> >> interface/port combo?
> >>
> >> HBase does not really run in a blended/multihomed environment...
> >> meaning if you have multiple interfaces, you have to choose one that
> >> we work over.  This is because we need to know a singular canonical
> >> IP/name for any given server because we put that info up inside
> >> ZooKeeper and META tables.  So it's not just an artificial constraint,
> >> but exists for cluster management needs.
> >>
> >> Having said that, we do work on multihomed machines, eg: ec2, you
> >> might bind hbase to the internal interface taking advantage of the
> >> unmetered/faster network. Also better for security as well.
> >>
> >> Let us know if you need more background on how we use the network and
> why.
> >> -ryan
> >>
> >> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <mj...@gmail.com>
> >> wrote:
> >> > Hi again,
> >> >
> >> > I think the hbase server master is not starting because it is
> attempting
> >> to
> >> > open port 60000 on its public IP address, rather than using localhost.
>  I
> >> > cannot seem to figure out how to force it (well, configure it) to
> attempt
> >> to
> >> > bind to localhost:60000 instead.  As far as I can see,  this is set in
> >> the
> >> > file:
> >> >
> >> > org/apache/hadoop/hbase/master/HMaster.java
> >> >
> >> > I don't know much about java, so I'd prefer not to edit the source if
> >> there
> >> > is an option, but I will if necessary.  Can someone please point me to
> >> the
> >> > way to change this setting?  Any help would be greatly appreciated.
> >> >
> >> > Thanks,
> >> > Michael
> >> >
> >> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <mjscottuic@gmail.com
> >> >wrote:
> >> >
> >> >> Hi again,
> >> >>
> >> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified it
> was
> >> >> still off, and encountered the same problem once again.
> >> >>
> >> >> I also tried to open port 60000 by hand with a small php file.  I can
> do
> >> >> this (as any user) for localhost.  I can NOT do this (not even as
> root)
> >> for
> >> >> the IP address which matches the fully qualified domain name, which
> is
> >> what
> >> >> hbase is trying to use.  Is there some way for me to configure hbase
> to
> >> use
> >> >> localhost instead of the fully qualified domain name for the master?
>  I
> >> >> would have thought this was done by default, or that there would be
> an
> >> >> obvious line in some conf file, but I can't find it.
> >> >>
> >> >> Thanks again,
> >> >>
> >> >> Michael
> >> >>
> >> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com>
> >> wrote:
> >> >>
> >> >>> Hi Michael,
> >> >>>
> >> >>> It might be related to IPV6. Do you have IPV6 enabled on this
> machine?
> >> >>>
> >> >>> Check out this hadoop JIRA that might be related for some tips:
> >> >>> https://issues.apache.org/jira/browse/HADOOP-6056
> >> >>>
> >> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
> >> >>>
> >> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <
> mjscottuic@gmail.com
> >> >>> >wrote:
> >> >>>
> >> >>> > That's correct.  I tried a number of different ports to see if
> there
> >> was
> >> >>> > something weird, and then I shut down the hadoop server and tried
> to
> >> >>> > connect
> >> >>> > to 50010 (which of course should have been free at that point) but
> >> got
> >> >>> the
> >> >>> > same "cannot assign to requested address" error.  If I start
> hadoop,
> >> >>> > netstat
> >> >>> > shows a process listening on 50010.
> >> >>> >
> >> >>> > I am going to try this on a different OS, I am wondering if FC11
> is
> >> my
> >> >>> > problem.
> >> >>> >
> >> >>> > Michael
> >> >>> >
> >> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
> >> >>> >
> >> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
> >> mjscottuic@gmail.com>
> >> >>> > > wrote:
> >> >>> > > > I don't see why hadoop binds
> >> >>> > > > to a port but hbase does not (I even tried starting hbase with
> >> >>> hadoop
> >> >>> > off
> >> >>> > > > and binding to 50010, which hadoop uses).
> >> >>> > > >
> >> >>> > >
> >> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We
> hadoop
> >> >>> > > their mechanism essentially).
> >> >>> > >
> >> >>> > > St.Ack
> >> >>> > >
> >> >>> >
> >> >>>
> >> >>>
> >> >>>
> >> >>> --
> >> >>> Todd Lipcon
> >> >>> Software Engineer, Cloudera
> >> >>>
> >> >>
> >> >>
> >> >
> >>
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Ryan Rawson <ry...@gmail.com>.
What is your ifconfig output looking like?



On Wed, Sep 15, 2010 at 10:07 PM, Michael Scott <mj...@gmail.com> wrote:
> Thanks for the continued advice.  I am still confused by the different
> behaviors of hadoop and hbase. As I said before, I can't get hbase to work
> on any of the ports that hadoop works on, so I guess hadoop and hbase are
> using different interfaces.  Why is this, and can't I ask hbase to use the
> interface that hadoop uses?  What interfaces are hadoop and hbase using?
>
> Also (and maybe this is the wrong forum for this question), how can I get my
> OS to allow me to open 60000 using the IP address?  I have temporarily
> disabled selinux and iptables, as I thought that this would simply allow all
> port connections. Still, this works just fine:
> bash-4.0$ nc -l  60000 > /tmp/nc.out
>
> but this does not:
> bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
> (returns "nc: Cannot assign requested address"; I get the same error for the
> hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0 is
> allowed)
>
> I am trying to get hbase running for a socorro server, which will running
> locally.  I don't know if that matters.
>
> Thanks,
> Michael
>
> On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com> wrote:
>
>> Hey,
>>
>> If you bind to localhost you wont actually be reachable by anyone!
>>
>> The question is why is your OS disallowing binds to a specific
>> interface/port combo?
>>
>> HBase does not really run in a blended/multihomed environment...
>> meaning if you have multiple interfaces, you have to choose one that
>> we work over.  This is because we need to know a singular canonical
>> IP/name for any given server because we put that info up inside
>> ZooKeeper and META tables.  So it's not just an artificial constraint,
>> but exists for cluster management needs.
>>
>> Having said that, we do work on multihomed machines, eg: ec2, you
>> might bind hbase to the internal interface taking advantage of the
>> unmetered/faster network. Also better for security as well.
>>
>> Let us know if you need more background on how we use the network and why.
>> -ryan
>>
>> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <mj...@gmail.com>
>> wrote:
>> > Hi again,
>> >
>> > I think the hbase server master is not starting because it is attempting
>> to
>> > open port 60000 on its public IP address, rather than using localhost.  I
>> > cannot seem to figure out how to force it (well, configure it) to attempt
>> to
>> > bind to localhost:60000 instead.  As far as I can see,  this is set in
>> the
>> > file:
>> >
>> > org/apache/hadoop/hbase/master/HMaster.java
>> >
>> > I don't know much about java, so I'd prefer not to edit the source if
>> there
>> > is an option, but I will if necessary.  Can someone please point me to
>> the
>> > way to change this setting?  Any help would be greatly appreciated.
>> >
>> > Thanks,
>> > Michael
>> >
>> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <mjscottuic@gmail.com
>> >wrote:
>> >
>> >> Hi again,
>> >>
>> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified it was
>> >> still off, and encountered the same problem once again.
>> >>
>> >> I also tried to open port 60000 by hand with a small php file.  I can do
>> >> this (as any user) for localhost.  I can NOT do this (not even as root)
>> for
>> >> the IP address which matches the fully qualified domain name, which is
>> what
>> >> hbase is trying to use.  Is there some way for me to configure hbase to
>> use
>> >> localhost instead of the fully qualified domain name for the master?  I
>> >> would have thought this was done by default, or that there would be an
>> >> obvious line in some conf file, but I can't find it.
>> >>
>> >> Thanks again,
>> >>
>> >> Michael
>> >>
>> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com>
>> wrote:
>> >>
>> >>> Hi Michael,
>> >>>
>> >>> It might be related to IPV6. Do you have IPV6 enabled on this machine?
>> >>>
>> >>> Check out this hadoop JIRA that might be related for some tips:
>> >>> https://issues.apache.org/jira/browse/HADOOP-6056
>> >>>
>> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
>> >>>
>> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <mjscottuic@gmail.com
>> >>> >wrote:
>> >>>
>> >>> > That's correct.  I tried a number of different ports to see if there
>> was
>> >>> > something weird, and then I shut down the hadoop server and tried to
>> >>> > connect
>> >>> > to 50010 (which of course should have been free at that point) but
>> got
>> >>> the
>> >>> > same "cannot assign to requested address" error.  If I start hadoop,
>> >>> > netstat
>> >>> > shows a process listening on 50010.
>> >>> >
>> >>> > I am going to try this on a different OS, I am wondering if FC11 is
>> my
>> >>> > problem.
>> >>> >
>> >>> > Michael
>> >>> >
>> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
>> >>> >
>> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
>> mjscottuic@gmail.com>
>> >>> > > wrote:
>> >>> > > > I don't see why hadoop binds
>> >>> > > > to a port but hbase does not (I even tried starting hbase with
>> >>> hadoop
>> >>> > off
>> >>> > > > and binding to 50010, which hadoop uses).
>> >>> > > >
>> >>> > >
>> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
>> >>> > > their mechanism essentially).
>> >>> > >
>> >>> > > St.Ack
>> >>> > >
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Todd Lipcon
>> >>> Software Engineer, Cloudera
>> >>>
>> >>
>> >>
>> >
>>
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
Thanks for the continued advice.  I am still confused by the different
behaviors of hadoop and hbase. As I said before, I can't get hbase to work
on any of the ports that hadoop works on, so I guess hadoop and hbase are
using different interfaces.  Why is this, and can't I ask hbase to use the
interface that hadoop uses?  What interfaces are hadoop and hbase using?

Also (and maybe this is the wrong forum for this question), how can I get my
OS to allow me to open 60000 using the IP address?  I have temporarily
disabled selinux and iptables, as I thought that this would simply allow all
port connections. Still, this works just fine:
bash-4.0$ nc -l  60000 > /tmp/nc.out

but this does not:
bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
(returns "nc: Cannot assign requested address"; I get the same error for the
hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0 is
allowed)

I am trying to get hbase running for a socorro server, which will running
locally.  I don't know if that matters.

Thanks,
Michael

On Wed, Sep 15, 2010 at 6:04 PM, Ryan Rawson <ry...@gmail.com> wrote:

> Hey,
>
> If you bind to localhost you wont actually be reachable by anyone!
>
> The question is why is your OS disallowing binds to a specific
> interface/port combo?
>
> HBase does not really run in a blended/multihomed environment...
> meaning if you have multiple interfaces, you have to choose one that
> we work over.  This is because we need to know a singular canonical
> IP/name for any given server because we put that info up inside
> ZooKeeper and META tables.  So it's not just an artificial constraint,
> but exists for cluster management needs.
>
> Having said that, we do work on multihomed machines, eg: ec2, you
> might bind hbase to the internal interface taking advantage of the
> unmetered/faster network. Also better for security as well.
>
> Let us know if you need more background on how we use the network and why.
> -ryan
>
> On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <mj...@gmail.com>
> wrote:
> > Hi again,
> >
> > I think the hbase server master is not starting because it is attempting
> to
> > open port 60000 on its public IP address, rather than using localhost.  I
> > cannot seem to figure out how to force it (well, configure it) to attempt
> to
> > bind to localhost:60000 instead.  As far as I can see,  this is set in
> the
> > file:
> >
> > org/apache/hadoop/hbase/master/HMaster.java
> >
> > I don't know much about java, so I'd prefer not to edit the source if
> there
> > is an option, but I will if necessary.  Can someone please point me to
> the
> > way to change this setting?  Any help would be greatly appreciated.
> >
> > Thanks,
> > Michael
> >
> > On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <mjscottuic@gmail.com
> >wrote:
> >
> >> Hi again,
> >>
> >> IPV6 was enabled.  I shut it off, rebooted to be sure, verified it was
> >> still off, and encountered the same problem once again.
> >>
> >> I also tried to open port 60000 by hand with a small php file.  I can do
> >> this (as any user) for localhost.  I can NOT do this (not even as root)
> for
> >> the IP address which matches the fully qualified domain name, which is
> what
> >> hbase is trying to use.  Is there some way for me to configure hbase to
> use
> >> localhost instead of the fully qualified domain name for the master?  I
> >> would have thought this was done by default, or that there would be an
> >> obvious line in some conf file, but I can't find it.
> >>
> >> Thanks again,
> >>
> >> Michael
> >>
> >> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com>
> wrote:
> >>
> >>> Hi Michael,
> >>>
> >>> It might be related to IPV6. Do you have IPV6 enabled on this machine?
> >>>
> >>> Check out this hadoop JIRA that might be related for some tips:
> >>> https://issues.apache.org/jira/browse/HADOOP-6056
> >>>
> >>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
> >>>
> >>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <mjscottuic@gmail.com
> >>> >wrote:
> >>>
> >>> > That's correct.  I tried a number of different ports to see if there
> was
> >>> > something weird, and then I shut down the hadoop server and tried to
> >>> > connect
> >>> > to 50010 (which of course should have been free at that point) but
> got
> >>> the
> >>> > same "cannot assign to requested address" error.  If I start hadoop,
> >>> > netstat
> >>> > shows a process listening on 50010.
> >>> >
> >>> > I am going to try this on a different OS, I am wondering if FC11 is
> my
> >>> > problem.
> >>> >
> >>> > Michael
> >>> >
> >>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
> >>> >
> >>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <
> mjscottuic@gmail.com>
> >>> > > wrote:
> >>> > > > I don't see why hadoop binds
> >>> > > > to a port but hbase does not (I even tried starting hbase with
> >>> hadoop
> >>> > off
> >>> > > > and binding to 50010, which hadoop uses).
> >>> > > >
> >>> > >
> >>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
> >>> > > their mechanism essentially).
> >>> > >
> >>> > > St.Ack
> >>> > >
> >>> >
> >>>
> >>>
> >>>
> >>> --
> >>> Todd Lipcon
> >>> Software Engineer, Cloudera
> >>>
> >>
> >>
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Ryan Rawson <ry...@gmail.com>.
Hey,

If you bind to localhost you wont actually be reachable by anyone!

The question is why is your OS disallowing binds to a specific
interface/port combo?

HBase does not really run in a blended/multihomed environment...
meaning if you have multiple interfaces, you have to choose one that
we work over.  This is because we need to know a singular canonical
IP/name for any given server because we put that info up inside
ZooKeeper and META tables.  So it's not just an artificial constraint,
but exists for cluster management needs.

Having said that, we do work on multihomed machines, eg: ec2, you
might bind hbase to the internal interface taking advantage of the
unmetered/faster network. Also better for security as well.

Let us know if you need more background on how we use the network and why.
-ryan

On Wed, Sep 15, 2010 at 10:18 AM, Michael Scott <mj...@gmail.com> wrote:
> Hi again,
>
> I think the hbase server master is not starting because it is attempting to
> open port 60000 on its public IP address, rather than using localhost.  I
> cannot seem to figure out how to force it (well, configure it) to attempt to
> bind to localhost:60000 instead.  As far as I can see,  this is set in the
> file:
>
> org/apache/hadoop/hbase/master/HMaster.java
>
> I don't know much about java, so I'd prefer not to edit the source if there
> is an option, but I will if necessary.  Can someone please point me to the
> way to change this setting?  Any help would be greatly appreciated.
>
> Thanks,
> Michael
>
> On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <mj...@gmail.com>wrote:
>
>> Hi again,
>>
>> IPV6 was enabled.  I shut it off, rebooted to be sure, verified it was
>> still off, and encountered the same problem once again.
>>
>> I also tried to open port 60000 by hand with a small php file.  I can do
>> this (as any user) for localhost.  I can NOT do this (not even as root) for
>> the IP address which matches the fully qualified domain name, which is what
>> hbase is trying to use.  Is there some way for me to configure hbase to use
>> localhost instead of the fully qualified domain name for the master?  I
>> would have thought this was done by default, or that there would be an
>> obvious line in some conf file, but I can't find it.
>>
>> Thanks again,
>>
>> Michael
>>
>> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com> wrote:
>>
>>> Hi Michael,
>>>
>>> It might be related to IPV6. Do you have IPV6 enabled on this machine?
>>>
>>> Check out this hadoop JIRA that might be related for some tips:
>>> https://issues.apache.org/jira/browse/HADOOP-6056
>>>
>>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
>>>
>>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <mjscottuic@gmail.com
>>> >wrote:
>>>
>>> > That's correct.  I tried a number of different ports to see if there was
>>> > something weird, and then I shut down the hadoop server and tried to
>>> > connect
>>> > to 50010 (which of course should have been free at that point) but got
>>> the
>>> > same "cannot assign to requested address" error.  If I start hadoop,
>>> > netstat
>>> > shows a process listening on 50010.
>>> >
>>> > I am going to try this on a different OS, I am wondering if FC11 is my
>>> > problem.
>>> >
>>> > Michael
>>> >
>>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
>>> >
>>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <mj...@gmail.com>
>>> > > wrote:
>>> > > > I don't see why hadoop binds
>>> > > > to a port but hbase does not (I even tried starting hbase with
>>> hadoop
>>> > off
>>> > > > and binding to 50010, which hadoop uses).
>>> > > >
>>> > >
>>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
>>> > > their mechanism essentially).
>>> > >
>>> > > St.Ack
>>> > >
>>> >
>>>
>>>
>>>
>>> --
>>> Todd Lipcon
>>> Software Engineer, Cloudera
>>>
>>
>>
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
Hi again,

I think the hbase server master is not starting because it is attempting to
open port 60000 on its public IP address, rather than using localhost.  I
cannot seem to figure out how to force it (well, configure it) to attempt to
bind to localhost:60000 instead.  As far as I can see,  this is set in the
file:

org/apache/hadoop/hbase/master/HMaster.java

I don't know much about java, so I'd prefer not to edit the source if there
is an option, but I will if necessary.  Can someone please point me to the
way to change this setting?  Any help would be greatly appreciated.

Thanks,
Michael

On Wed, Sep 15, 2010 at 12:42 AM, Michael Scott <mj...@gmail.com>wrote:

> Hi again,
>
> IPV6 was enabled.  I shut it off, rebooted to be sure, verified it was
> still off, and encountered the same problem once again.
>
> I also tried to open port 60000 by hand with a small php file.  I can do
> this (as any user) for localhost.  I can NOT do this (not even as root) for
> the IP address which matches the fully qualified domain name, which is what
> hbase is trying to use.  Is there some way for me to configure hbase to use
> localhost instead of the fully qualified domain name for the master?  I
> would have thought this was done by default, or that there would be an
> obvious line in some conf file, but I can't find it.
>
> Thanks again,
>
> Michael
>
> On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com> wrote:
>
>> Hi Michael,
>>
>> It might be related to IPV6. Do you have IPV6 enabled on this machine?
>>
>> Check out this hadoop JIRA that might be related for some tips:
>> https://issues.apache.org/jira/browse/HADOOP-6056
>>
>> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
>>
>> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <mjscottuic@gmail.com
>> >wrote:
>>
>> > That's correct.  I tried a number of different ports to see if there was
>> > something weird, and then I shut down the hadoop server and tried to
>> > connect
>> > to 50010 (which of course should have been free at that point) but got
>> the
>> > same "cannot assign to requested address" error.  If I start hadoop,
>> > netstat
>> > shows a process listening on 50010.
>> >
>> > I am going to try this on a different OS, I am wondering if FC11 is my
>> > problem.
>> >
>> > Michael
>> >
>> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
>> >
>> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <mj...@gmail.com>
>> > > wrote:
>> > > > I don't see why hadoop binds
>> > > > to a port but hbase does not (I even tried starting hbase with
>> hadoop
>> > off
>> > > > and binding to 50010, which hadoop uses).
>> > > >
>> > >
>> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
>> > > their mechanism essentially).
>> > >
>> > > St.Ack
>> > >
>> >
>>
>>
>>
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>>
>
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
Hi again,

IPV6 was enabled.  I shut it off, rebooted to be sure, verified it was still
off, and encountered the same problem once again.

I also tried to open port 60000 by hand with a small php file.  I can do
this (as any user) for localhost.  I can NOT do this (not even as root) for
the IP address which matches the fully qualified domain name, which is what
hbase is trying to use.  Is there some way for me to configure hbase to use
localhost instead of the fully qualified domain name for the master?  I
would have thought this was done by default, or that there would be an
obvious line in some conf file, but I can't find it.

Thanks again,

Michael

On Tue, Sep 14, 2010 at 12:23 PM, Todd Lipcon <to...@cloudera.com> wrote:

> Hi Michael,
>
> It might be related to IPV6. Do you have IPV6 enabled on this machine?
>
> Check out this hadoop JIRA that might be related for some tips:
> https://issues.apache.org/jira/browse/HADOOP-6056
>
> <https://issues.apache.org/jira/browse/HADOOP-6056>-Todd
>
> On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <mjscottuic@gmail.com
> >wrote:
>
> > That's correct.  I tried a number of different ports to see if there was
> > something weird, and then I shut down the hadoop server and tried to
> > connect
> > to 50010 (which of course should have been free at that point) but got
> the
> > same "cannot assign to requested address" error.  If I start hadoop,
> > netstat
> > shows a process listening on 50010.
> >
> > I am going to try this on a different OS, I am wondering if FC11 is my
> > problem.
> >
> > Michael
> >
> > On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
> >
> > > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <mj...@gmail.com>
> > > wrote:
> > > > I don't see why hadoop binds
> > > > to a port but hbase does not (I even tried starting hbase with hadoop
> > off
> > > > and binding to 50010, which hadoop uses).
> > > >
> > >
> > > Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
> > > their mechanism essentially).
> > >
> > > St.Ack
> > >
> >
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Todd Lipcon <to...@cloudera.com>.
Hi Michael,

It might be related to IPV6. Do you have IPV6 enabled on this machine?

Check out this hadoop JIRA that might be related for some tips:
https://issues.apache.org/jira/browse/HADOOP-6056

<https://issues.apache.org/jira/browse/HADOOP-6056>-Todd

On Tue, Sep 14, 2010 at 10:17 AM, Michael Scott <mj...@gmail.com>wrote:

> That's correct.  I tried a number of different ports to see if there was
> something weird, and then I shut down the hadoop server and tried to
> connect
> to 50010 (which of course should have been free at that point) but got the
> same "cannot assign to requested address" error.  If I start hadoop,
> netstat
> shows a process listening on 50010.
>
> I am going to try this on a different OS, I am wondering if FC11 is my
> problem.
>
> Michael
>
> On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:
>
> > On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <mj...@gmail.com>
> > wrote:
> > > I don't see why hadoop binds
> > > to a port but hbase does not (I even tried starting hbase with hadoop
> off
> > > and binding to 50010, which hadoop uses).
> > >
> >
> > Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
> > their mechanism essentially).
> >
> > St.Ack
> >
>



-- 
Todd Lipcon
Software Engineer, Cloudera

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
That's correct.  I tried a number of different ports to see if there was
something weird, and then I shut down the hadoop server and tried to connect
to 50010 (which of course should have been free at that point) but got the
same "cannot assign to requested address" error.  If I start hadoop, netstat
shows a process listening on 50010.

I am going to try this on a different OS, I am wondering if FC11 is my
problem.

Michael

On Tue, Sep 14, 2010 at 11:41 AM, Stack <st...@duboce.net> wrote:

> On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <mj...@gmail.com>
> wrote:
> > I don't see why hadoop binds
> > to a port but hbase does not (I even tried starting hbase with hadoop off
> > and binding to 50010, which hadoop uses).
> >
>
> Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
> their mechanism essentially).
>
> St.Ack
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Stack <st...@duboce.net>.
On Tue, Sep 14, 2010 at 9:33 AM, Michael Scott <mj...@gmail.com> wrote:
> I don't see why hadoop binds
> to a port but hbase does not (I even tried starting hbase with hadoop off
> and binding to 50010, which hadoop uses).
>

Using 50010 worked for hadoop but not for hbase?  (Odd.  We hadoop
their mechanism essentially).

St.Ack

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
Thanks again.  Don't worry, we're not exposing to the outside world, I was
just clarifying that the IP address exists and takes connections, both
internal and external, on other ports.  I will see if I can figure out why
it is choking on the 60000 port.  I'm not much of on expert on this, I know
to look in netstat but that's about it.  I'll report back with what I find.
I have tried editing the port number in hbase-site.xml, and it won't bind to
any other port either.  The error message for 60000 and other non-privileged
ports not in use is the same as the error message I get if I try to bind to
a port that I know is taken, like 50010.  If I try a low-numbered privileged
port then I get "Permission denied" instead.  I don't see why hadoop binds
to a port but hbase does not (I even tried starting hbase with hadoop off
and binding to 50010, which hadoop uses).

Michael

On Tue, Sep 14, 2010 at 1:11 AM, Ryan Rawson <ry...@gmail.com> wrote:

> i wouldnt expose either hadoop or hbase to the outside world! It's
> pretty trivial to oom a server with data to the port.  hardening the
> port just hasnt been a priority yet.
>
> but the log message suggests either a port issue, or a IP issue...
> perhaps you can dig a little bit more and let us know what you find?
>
> -ryan
>
> On Mon, Sep 13, 2010 at 10:50 PM, Michael Scott <mj...@gmail.com>
> wrote:
> > The IP is a static address through comcast, and we point gslbiotech.comto
> > it as well (http works with hostname or IP number, so I think the IP
> > interface is live).  I don't know if that leading / means anything.  Note
> > that hadoop binds just fine to the 500XX ports on that IP.
> >
> > Michael
> >
> > On Tue, Sep 14, 2010 at 12:41 AM, Ryan Rawson <ry...@gmail.com>
> wrote:
> >
> >> dur my mistake look at this line:
> >>
> >> java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
> >>
> >> do you have an interface for that IP?
> >>
> >> we use the hostname to find the IP and then bind to that IP.
> >>
> >> -ryan
> >>
> >> On Mon, Sep 13, 2010 at 10:36 PM, Michael Scott <mj...@gmail.com>
> >> wrote:
> >> > I wish it were so, but no port 600XX is in use:
> >> >
> >> > [root]# netstat -anp | grep 600
> >> > unix  3      [ ]         STREAM     CONNECTED     8600
> >> 1480/avahi-daemon:
> >> >
> >> >
> >> > thanks,
> >> > Michael
> >> >
> >> > On Tue, Sep 14, 2010 at 12:22 AM, Ryan Rawson <ry...@gmail.com>
> >> wrote:
> >> >
> >> >> you can use:
> >> >>
> >> >> netstat -anp
> >> >>
> >> >> to figure out which process is using port 60000.
> >> >>
> >> >> -ryan
> >> >>
> >> >> On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <
> mjscottuic@gmail.com>
> >> >> wrote:
> >> >> > Hi,
> >> >> >
> >> >> > I am trying to install a standalone hbase server on Fedora Core 11.
>  I
> >> >> have
> >> >> > hadoop running:
> >> >> >
> >> >> > bash-4.0$ jps
> >> >> > 30908 JobTracker
> >> >> > 30631 NameNode
> >> >> > 30824 SecondaryNameNode
> >> >> > 30731 DataNode
> >> >> > 30987 TaskTracker
> >> >> > 31137 Jps
> >> >> >
> >> >> > The only edit I have made to the hbase-0.20.6 directory from the
> >> tarball
> >> >> is
> >> >> > to point to the Java installation (the same as used by hadoop):
> >> >> > export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
> >> >> >
> >> >> > I have verified sshd passwordless login for hadoop for all
> variations
> >> of
> >> >> the
> >> >> > hostname (localhost, qualifiedname.com, www.qualifiedname.com,
> >> straight
> >> >> IP
> >> >> > address), and have added the qualified hostnames to /etc/hosts just
> to
> >> be
> >> >> > sure.
> >> >> >
> >> >> > When I attempt to start the hbase server with start-hbase.sh (as
> >> hadoop)
> >> >> the
> >> >> > following appears in the log file:
> >> >> >
> >> >> > 2010-09-14 00:36:45,555 INFO
> org.apache.hadoop.hbase.master.HMaster:
> >> My
> >> >> > address is qualifiedname.com:60000
> >> >> > 2010-09-14 00:36:45,682 ERROR
> org.apache.hadoop.hbase.master.HMaster:
> >> Can
> >> >> > not start master
> >> >> > java.net.BindException: Problem binding to /97.86.88.18:60000 :
> >> Cannot
> >> >> > assign requested address
> >> >> >        at
> >> >> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
> >> >> >        at
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
> >> >> >        at
> >> >> >
> org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
> >> >> >        at
> >> >> >
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
> >> >> >        at
> >> >> org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
> >> >> >        at
> >> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
> >> >> >        at
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
> >> >> >        at
> >> >> >
> >> >>
> >>
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
> >> >> >        at
> >> >> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
> >> >> >        at
> >> org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
> >> >> > Caused by: java.net.BindException: Cannot assign requested address
> >> >> >        at sun.nio.ch.Net.bind(Native Method)
> >> >> >        at
> >> >> >
> >>
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
> >> >> >        at
> >> >> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >> >> >        at
> >> >> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
> >> >> >        ... 9 more
> >> >> >
> >> >> > At this point zookeeper is apparently running, but hbase master is
> >> not:
> >> >> > bash-4.0$ jps
> >> >> > 31454 HQuorumPeer
> >> >> > 30908 JobTracker
> >> >> > 30631 NameNode
> >> >> > 30824 SecondaryNameNode
> >> >> > 30731 DataNode
> >> >> > 31670 Jps
> >> >> > 30987 TaskTracker
> >> >> >
> >> >> > I am stumped -- the documentation simply says that the standalone
> >> server
> >> >> > should work out of the box, and it would seem to me that  hadoop.
> >>  Does
> >> >> > anyone have any suggestions here?  Thanks in advance!
> >> >> >
> >> >> > Michael
> >> >> >
> >> >> > Michael
> >> >> >
> >> >>
> >> >
> >>
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Ryan Rawson <ry...@gmail.com>.
i wouldnt expose either hadoop or hbase to the outside world! It's
pretty trivial to oom a server with data to the port.  hardening the
port just hasnt been a priority yet.

but the log message suggests either a port issue, or a IP issue...
perhaps you can dig a little bit more and let us know what you find?

-ryan

On Mon, Sep 13, 2010 at 10:50 PM, Michael Scott <mj...@gmail.com> wrote:
> The IP is a static address through comcast, and we point gslbiotech.com to
> it as well (http works with hostname or IP number, so I think the IP
> interface is live).  I don't know if that leading / means anything.  Note
> that hadoop binds just fine to the 500XX ports on that IP.
>
> Michael
>
> On Tue, Sep 14, 2010 at 12:41 AM, Ryan Rawson <ry...@gmail.com> wrote:
>
>> dur my mistake look at this line:
>>
>> java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
>>
>> do you have an interface for that IP?
>>
>> we use the hostname to find the IP and then bind to that IP.
>>
>> -ryan
>>
>> On Mon, Sep 13, 2010 at 10:36 PM, Michael Scott <mj...@gmail.com>
>> wrote:
>> > I wish it were so, but no port 600XX is in use:
>> >
>> > [root]# netstat -anp | grep 600
>> > unix  3      [ ]         STREAM     CONNECTED     8600
>> 1480/avahi-daemon:
>> >
>> >
>> > thanks,
>> > Michael
>> >
>> > On Tue, Sep 14, 2010 at 12:22 AM, Ryan Rawson <ry...@gmail.com>
>> wrote:
>> >
>> >> you can use:
>> >>
>> >> netstat -anp
>> >>
>> >> to figure out which process is using port 60000.
>> >>
>> >> -ryan
>> >>
>> >> On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <mj...@gmail.com>
>> >> wrote:
>> >> > Hi,
>> >> >
>> >> > I am trying to install a standalone hbase server on Fedora Core 11.  I
>> >> have
>> >> > hadoop running:
>> >> >
>> >> > bash-4.0$ jps
>> >> > 30908 JobTracker
>> >> > 30631 NameNode
>> >> > 30824 SecondaryNameNode
>> >> > 30731 DataNode
>> >> > 30987 TaskTracker
>> >> > 31137 Jps
>> >> >
>> >> > The only edit I have made to the hbase-0.20.6 directory from the
>> tarball
>> >> is
>> >> > to point to the Java installation (the same as used by hadoop):
>> >> > export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
>> >> >
>> >> > I have verified sshd passwordless login for hadoop for all variations
>> of
>> >> the
>> >> > hostname (localhost, qualifiedname.com, www.qualifiedname.com,
>> straight
>> >> IP
>> >> > address), and have added the qualified hostnames to /etc/hosts just to
>> be
>> >> > sure.
>> >> >
>> >> > When I attempt to start the hbase server with start-hbase.sh (as
>> hadoop)
>> >> the
>> >> > following appears in the log file:
>> >> >
>> >> > 2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster:
>> My
>> >> > address is qualifiedname.com:60000
>> >> > 2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster:
>> Can
>> >> > not start master
>> >> > java.net.BindException: Problem binding to /97.86.88.18:60000 :
>> Cannot
>> >> > assign requested address
>> >> >        at
>> >> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
>> >> >        at
>> >> >
>> >>
>> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
>> >> >        at
>> >> > org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
>> >> >        at
>> >> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
>> >> >        at
>> >> org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
>> >> >        at
>> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
>> >> >        at
>> >> >
>> >>
>> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
>> >> >        at
>> >> >
>> >>
>> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
>> >> >        at
>> >> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
>> >> >        at
>> org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
>> >> > Caused by: java.net.BindException: Cannot assign requested address
>> >> >        at sun.nio.ch.Net.bind(Native Method)
>> >> >        at
>> >> >
>> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>> >> >        at
>> >> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>> >> >        at
>> >> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
>> >> >        ... 9 more
>> >> >
>> >> > At this point zookeeper is apparently running, but hbase master is
>> not:
>> >> > bash-4.0$ jps
>> >> > 31454 HQuorumPeer
>> >> > 30908 JobTracker
>> >> > 30631 NameNode
>> >> > 30824 SecondaryNameNode
>> >> > 30731 DataNode
>> >> > 31670 Jps
>> >> > 30987 TaskTracker
>> >> >
>> >> > I am stumped -- the documentation simply says that the standalone
>> server
>> >> > should work out of the box, and it would seem to me that  hadoop.
>>  Does
>> >> > anyone have any suggestions here?  Thanks in advance!
>> >> >
>> >> > Michael
>> >> >
>> >> > Michael
>> >> >
>> >>
>> >
>>
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
The IP is a static address through comcast, and we point gslbiotech.com to
it as well (http works with hostname or IP number, so I think the IP
interface is live).  I don't know if that leading / means anything.  Note
that hadoop binds just fine to the 500XX ports on that IP.

Michael

On Tue, Sep 14, 2010 at 12:41 AM, Ryan Rawson <ry...@gmail.com> wrote:

> dur my mistake look at this line:
>
> java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
>
> do you have an interface for that IP?
>
> we use the hostname to find the IP and then bind to that IP.
>
> -ryan
>
> On Mon, Sep 13, 2010 at 10:36 PM, Michael Scott <mj...@gmail.com>
> wrote:
> > I wish it were so, but no port 600XX is in use:
> >
> > [root]# netstat -anp | grep 600
> > unix  3      [ ]         STREAM     CONNECTED     8600
> 1480/avahi-daemon:
> >
> >
> > thanks,
> > Michael
> >
> > On Tue, Sep 14, 2010 at 12:22 AM, Ryan Rawson <ry...@gmail.com>
> wrote:
> >
> >> you can use:
> >>
> >> netstat -anp
> >>
> >> to figure out which process is using port 60000.
> >>
> >> -ryan
> >>
> >> On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <mj...@gmail.com>
> >> wrote:
> >> > Hi,
> >> >
> >> > I am trying to install a standalone hbase server on Fedora Core 11.  I
> >> have
> >> > hadoop running:
> >> >
> >> > bash-4.0$ jps
> >> > 30908 JobTracker
> >> > 30631 NameNode
> >> > 30824 SecondaryNameNode
> >> > 30731 DataNode
> >> > 30987 TaskTracker
> >> > 31137 Jps
> >> >
> >> > The only edit I have made to the hbase-0.20.6 directory from the
> tarball
> >> is
> >> > to point to the Java installation (the same as used by hadoop):
> >> > export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
> >> >
> >> > I have verified sshd passwordless login for hadoop for all variations
> of
> >> the
> >> > hostname (localhost, qualifiedname.com, www.qualifiedname.com,
> straight
> >> IP
> >> > address), and have added the qualified hostnames to /etc/hosts just to
> be
> >> > sure.
> >> >
> >> > When I attempt to start the hbase server with start-hbase.sh (as
> hadoop)
> >> the
> >> > following appears in the log file:
> >> >
> >> > 2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster:
> My
> >> > address is qualifiedname.com:60000
> >> > 2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster:
> Can
> >> > not start master
> >> > java.net.BindException: Problem binding to /97.86.88.18:60000 :
> Cannot
> >> > assign requested address
> >> >        at
> >> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
> >> >        at
> >> >
> >>
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
> >> >        at
> >> > org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
> >> >        at
> >> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
> >> >        at
> >> org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
> >> >        at
> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
> >> >        at
> >> >
> >>
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
> >> >        at
> >> >
> >>
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
> >> >        at
> >> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
> >> >        at
> org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
> >> > Caused by: java.net.BindException: Cannot assign requested address
> >> >        at sun.nio.ch.Net.bind(Native Method)
> >> >        at
> >> >
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
> >> >        at
> >> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >> >        at
> >> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
> >> >        ... 9 more
> >> >
> >> > At this point zookeeper is apparently running, but hbase master is
> not:
> >> > bash-4.0$ jps
> >> > 31454 HQuorumPeer
> >> > 30908 JobTracker
> >> > 30631 NameNode
> >> > 30824 SecondaryNameNode
> >> > 30731 DataNode
> >> > 31670 Jps
> >> > 30987 TaskTracker
> >> >
> >> > I am stumped -- the documentation simply says that the standalone
> server
> >> > should work out of the box, and it would seem to me that  hadoop.
>  Does
> >> > anyone have any suggestions here?  Thanks in advance!
> >> >
> >> > Michael
> >> >
> >> > Michael
> >> >
> >>
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Ryan Rawson <ry...@gmail.com>.
dur my mistake look at this line:

java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot

do you have an interface for that IP?

we use the hostname to find the IP and then bind to that IP.

-ryan

On Mon, Sep 13, 2010 at 10:36 PM, Michael Scott <mj...@gmail.com> wrote:
> I wish it were so, but no port 600XX is in use:
>
> [root]# netstat -anp | grep 600
> unix  3      [ ]         STREAM     CONNECTED     8600   1480/avahi-daemon:
>
>
> thanks,
> Michael
>
> On Tue, Sep 14, 2010 at 12:22 AM, Ryan Rawson <ry...@gmail.com> wrote:
>
>> you can use:
>>
>> netstat -anp
>>
>> to figure out which process is using port 60000.
>>
>> -ryan
>>
>> On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <mj...@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > I am trying to install a standalone hbase server on Fedora Core 11.  I
>> have
>> > hadoop running:
>> >
>> > bash-4.0$ jps
>> > 30908 JobTracker
>> > 30631 NameNode
>> > 30824 SecondaryNameNode
>> > 30731 DataNode
>> > 30987 TaskTracker
>> > 31137 Jps
>> >
>> > The only edit I have made to the hbase-0.20.6 directory from the tarball
>> is
>> > to point to the Java installation (the same as used by hadoop):
>> > export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
>> >
>> > I have verified sshd passwordless login for hadoop for all variations of
>> the
>> > hostname (localhost, qualifiedname.com, www.qualifiedname.com, straight
>> IP
>> > address), and have added the qualified hostnames to /etc/hosts just to be
>> > sure.
>> >
>> > When I attempt to start the hbase server with start-hbase.sh (as hadoop)
>> the
>> > following appears in the log file:
>> >
>> > 2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster: My
>> > address is qualifiedname.com:60000
>> > 2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster: Can
>> > not start master
>> > java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
>> > assign requested address
>> >        at
>> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
>> >        at
>> >
>> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
>> >        at
>> > org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
>> >        at
>> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
>> >        at
>> org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
>> >        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
>> >        at
>> >
>> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
>> >        at
>> >
>> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
>> >        at
>> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
>> >        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
>> > Caused by: java.net.BindException: Cannot assign requested address
>> >        at sun.nio.ch.Net.bind(Native Method)
>> >        at
>> > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>> >        at
>> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>> >        at
>> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
>> >        ... 9 more
>> >
>> > At this point zookeeper is apparently running, but hbase master is not:
>> > bash-4.0$ jps
>> > 31454 HQuorumPeer
>> > 30908 JobTracker
>> > 30631 NameNode
>> > 30824 SecondaryNameNode
>> > 30731 DataNode
>> > 31670 Jps
>> > 30987 TaskTracker
>> >
>> > I am stumped -- the documentation simply says that the standalone server
>> > should work out of the box, and it would seem to me that  hadoop.  Does
>> > anyone have any suggestions here?  Thanks in advance!
>> >
>> > Michael
>> >
>> > Michael
>> >
>>
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Michael Scott <mj...@gmail.com>.
I wish it were so, but no port 600XX is in use:

[root]# netstat -anp | grep 600
unix  3      [ ]         STREAM     CONNECTED     8600   1480/avahi-daemon:


thanks,
Michael

On Tue, Sep 14, 2010 at 12:22 AM, Ryan Rawson <ry...@gmail.com> wrote:

> you can use:
>
> netstat -anp
>
> to figure out which process is using port 60000.
>
> -ryan
>
> On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <mj...@gmail.com>
> wrote:
> > Hi,
> >
> > I am trying to install a standalone hbase server on Fedora Core 11.  I
> have
> > hadoop running:
> >
> > bash-4.0$ jps
> > 30908 JobTracker
> > 30631 NameNode
> > 30824 SecondaryNameNode
> > 30731 DataNode
> > 30987 TaskTracker
> > 31137 Jps
> >
> > The only edit I have made to the hbase-0.20.6 directory from the tarball
> is
> > to point to the Java installation (the same as used by hadoop):
> > export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
> >
> > I have verified sshd passwordless login for hadoop for all variations of
> the
> > hostname (localhost, qualifiedname.com, www.qualifiedname.com, straight
> IP
> > address), and have added the qualified hostnames to /etc/hosts just to be
> > sure.
> >
> > When I attempt to start the hbase server with start-hbase.sh (as hadoop)
> the
> > following appears in the log file:
> >
> > 2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster: My
> > address is qualifiedname.com:60000
> > 2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster: Can
> > not start master
> > java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
> > assign requested address
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
> >        at
> org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
> >        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
> >        at
> >
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
> >        at
> >
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
> >        at
> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
> >        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
> > Caused by: java.net.BindException: Cannot assign requested address
> >        at sun.nio.ch.Net.bind(Native Method)
> >        at
> > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
> >        at
> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
> >        ... 9 more
> >
> > At this point zookeeper is apparently running, but hbase master is not:
> > bash-4.0$ jps
> > 31454 HQuorumPeer
> > 30908 JobTracker
> > 30631 NameNode
> > 30824 SecondaryNameNode
> > 30731 DataNode
> > 31670 Jps
> > 30987 TaskTracker
> >
> > I am stumped -- the documentation simply says that the standalone server
> > should work out of the box, and it would seem to me that  hadoop.  Does
> > anyone have any suggestions here?  Thanks in advance!
> >
> > Michael
> >
> > Michael
> >
>

Re: hbase standalone cannot start master, cannot assign requested address at port 60000

Posted by Ryan Rawson <ry...@gmail.com>.
you can use:

netstat -anp

to figure out which process is using port 60000.

-ryan

On Mon, Sep 13, 2010 at 10:16 PM, Michael Scott <mj...@gmail.com> wrote:
> Hi,
>
> I am trying to install a standalone hbase server on Fedora Core 11.  I have
> hadoop running:
>
> bash-4.0$ jps
> 30908 JobTracker
> 30631 NameNode
> 30824 SecondaryNameNode
> 30731 DataNode
> 30987 TaskTracker
> 31137 Jps
>
> The only edit I have made to the hbase-0.20.6 directory from the tarball is
> to point to the Java installation (the same as used by hadoop):
> export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/
>
> I have verified sshd passwordless login for hadoop for all variations of the
> hostname (localhost, qualifiedname.com, www.qualifiedname.com, straight IP
> address), and have added the qualified hostnames to /etc/hosts just to be
> sure.
>
> When I attempt to start the hbase server with start-hbase.sh (as hadoop) the
> following appears in the log file:
>
> 2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster: My
> address is qualifiedname.com:60000
> 2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster: Can
> not start master
> java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
> assign requested address
>        at
> org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
>        at
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
>        at
> org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
>        at
> org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
>        at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
>        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
>        at
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
>        at
> org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
>        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
>        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
> Caused by: java.net.BindException: Cannot assign requested address
>        at sun.nio.ch.Net.bind(Native Method)
>        at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>        at
> org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
>        ... 9 more
>
> At this point zookeeper is apparently running, but hbase master is not:
> bash-4.0$ jps
> 31454 HQuorumPeer
> 30908 JobTracker
> 30631 NameNode
> 30824 SecondaryNameNode
> 30731 DataNode
> 31670 Jps
> 30987 TaskTracker
>
> I am stumped -- the documentation simply says that the standalone server
> should work out of the box, and it would seem to me that  hadoop.  Does
> anyone have any suggestions here?  Thanks in advance!
>
> Michael
>
> Michael
>