Posted to user@hbase.apache.org by Rahul Mehta <ra...@gmail.com> on 2011/12/05 08:08:41 UTC

hbase master is not starting @ 60010 on new Ubuntu 11.04 system

I am trying to install HBase on my local Ubuntu 11.04 system.

These are the steps I followed:

   1. SSH Configuration
      1. ssh-keygen -t rsa
      2. press Enter at each prompt to accept the defaults.
      3. cd ~/.ssh
      4. cat id_rsa.pub >> authorized_keys (so ssh localhost works without
      a password)
      5. sudo apt-get install openssh-server
      6. ssh localhost
   2. Installing and configuring ZooKeeper
      1. wget http://archive.cloudera.com/cdh/3/zookeeper-3.3.3-cdh3u1.tar.gz
      2. tar xzvf zookeeper-3.3.3-cdh3u1.tar.gz
      3. mv zoo.cfg zoo1.cfg (in the ZooKeeper conf directory)
      4. cp zoo1.cfg zoo2.cfg
      5. cp zoo1.cfg zoo3.cfg
      6. Make a data directory for each config file; I made
      /home/rahul/oodebesetup/data/zookeeper/data1, data2, and data3. Set
      dataDir in each config file to its respective directory.
      7. Create a myid file in each data directory containing the server
      id: 1, 2, and 3 respectively.
      8. Add these lines at the bottom of all three config files:
         1. server.1=localhost:2878:3878
         2. server.2=localhost:2879:3879
         3. server.3=localhost:2880:3880
      9. Set clientPort to 2181, 2182, and 2183 in zoo1.cfg, zoo2.cfg, and
      zoo3.cfg respectively.
      10. ./bin/zkServer.sh start <ZooKeeper configuration file in the
      conf directory, e.g. zoo1.cfg>
      11. Verify with the jps command; there should be three processes:
         1. 3466 QuorumPeerMain
         2. 3399 QuorumPeerMain
         3. 3426 QuorumPeerMain
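The myid setup above (steps 6 and 7) can be sketched as a small script. Note /tmp/zk-demo is a stand-in path for illustration, not the /home/rahul/oodebesetup path used above:

```shell
# Create a dataDir and a myid file for each of the three local
# ZooKeeper instances. /tmp/zk-demo is a stand-in base path.
base=/tmp/zk-demo
for i in 1 2 3; do
  mkdir -p "$base/data$i"            # this becomes dataDir in zoo$i.cfg
  echo "$i" > "$base/data$i/myid"    # myid holds the server id (the N in server.N)
done
cat /tmp/zk-demo/data2/myid          # prints: 2
```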



   3. Installing and configuring Hadoop
      1. wget http://archive.cloudera.com/cdh/3/hadoop-0.20.2-cdh3u1.tar.gz
      2. tar xzvf hadoop-0.20.2-cdh3u1.tar.gz
      3. Create a data directory for Hadoop with an hdfs folder inside.
      4. edit <HADOOP_HOME>/conf/core-site.xml


<property>

 <name>fs.default.name</name>

 <value>hdfs://localhost:9000</value>

 <description>This is the namenode uri</description>

</property>

 <property>

 <name>hadoop.tmp.dir</name>

 <value>/home/rahul/oodebesetup/data/hadoop/hdfs</value>

 <description>Base directory for Hadoop's temporary and data files</description>

</property>
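One note on these fragments: in core-site.xml (and the other *-site.xml files below), the <property> elements must sit inside a <configuration> root element, or Hadoop will not pick them up. For example, the two properties above go into core-site.xml like this:

```xml
<?xml version="1.0"?>
<!-- core-site.xml: property blocks live inside <configuration> -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/rahul/oodebesetup/data/hadoop/hdfs</value>
  </property>
</configuration>
```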

      5. edit <HADOOP_HOME>/conf/hdfs-site.xml


<property>

 <name>dfs.replication</name>

 <value>1</value>

 <description>Default block replication. The actual number of replications
can be specified when the file is created. The default is used if
replication is not specified at create time.</description>

</property>

      6. edit <HADOOP_HOME>/conf/mapred-site.xml


<property>

<name>mapred.job.tracker</name>

<value>localhost:9001</value>

<description>The host and port that the MapReduce job tracker runs at. If
"local", then jobs are run in-process as a single map and reduce
task.</description>

</property>

      7. edit <HADOOP_HOME>/conf/hadoop-env.sh
         1. export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
      8. Format the namenode
         1. <HADOOP_HOME>/bin/hadoop namenode -format
      9. Start the Hadoop daemons
         1. <HADOOP_HOME>/bin/start-all.sh
      10. Verify with jps; there should be these processes:
         1. 10119 DataNode
         2. 10413 JobTracker
         3. 10338 SecondaryNameNode
         4. 10625 TaskTracker
         5. 9897 NameNode
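The jps check above can be scripted. The listing below is sample text standing in for real jps output; on a live system you would use jps_out="$(jps)":

```shell
# Confirm each expected Hadoop daemon appears in a jps listing.
# jps_out here is a hard-coded sample, not real output.
jps_out="10119 DataNode
10413 JobTracker
10338 SecondaryNameNode
10625 TaskTracker
9897 NameNode"
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  echo "$jps_out" | grep -q "$d" && echo "$d: running"
done
```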
   4. Installing and configuring HBase
      1. wget http://archive.cloudera.com/cdh/3/hbase-0.90.3-cdh3u1.tar.gz
      2. tar xzvf hbase-0.90.3-cdh3u1.tar.gz
      3. edit <HBASE_HOME>/conf/hbase-site.xml


   <property>

<name>hbase.master</name>

<value>localhost:60000</value>

<description>The host and port that the HBase master runs at. A value of
'local' runs the master and a regionserver in a single
process.</description>

</property>

<property>

<name>hbase.rootdir</name>

<value>hdfs://localhost:9000/hdfs</value>

<description>The directory shared by region servers.</description>

</property>

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

<description>The mode the cluster will be in. Possible values are false:
standalone and pseudo-distributed setups with managed ZooKeeper; true:
fully-distributed with unmanaged ZooKeeper quorum (see hbase-env.sh).
</description>

</property>

<property>

<name>hbase.zookeeper.property.clientPort</name>

<value>2181</value>

<description>Property from ZooKeeper's config zoo.cfg. The port at which
the clients will connect.</description>

</property>

<property>

<name>hbase.zookeeper.quorum</name>

<value>localhost</value>

<description>Comma separated list of servers in the ZooKeeper quorum. For
example, "host1.mydomain.com,host2.mydomain.com". By default this is set to
localhost for local and pseudo-distributed modes of operation. For a
fully-distributed setup, this should be set to a full list of ZooKeeper
quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh, this is the
list of servers which we will start/stop ZooKeeper on.</description>

</property>

      4. edit <HBASE_HOME>/conf/hbase-env.sh
         1. export HBASE_MANAGES_ZK=false
         2. export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
      5. Start the HBase server
         1. <HBASE_HOME>/bin/start-hbase.sh
      6. Verify with the jps command; there should be:
         1. HMaster
         2. HRegionServer

But when I open the HBase web console @ http://localhost:60010 it does not
load. Can anyone suggest why?


-- 
Thanks & Regards

Rahul Mehta

Re: hbase master is not starting @ 60010 on new Ubuntu 11.04 system

Posted by Stack <st...@duboce.net>.
On Mon, Dec 5, 2011 at 9:22 AM, Harsh J <ha...@cloudera.com> wrote:
> Wow, that is almost a guide for the rest of us....
>
> Going through your steps, ...

Wow.  Nice response Harsh.  It's almost a guide for the rest of us (smile).
St.Ack

Re: hbase master is not starting @ 60010 on new Ubuntu 11.04 system

Posted by Harsh J <ha...@cloudera.com>.
Wow, that is almost a guide for the rest of us. You missed only the logs.

Going through your steps, these are all I note:
1. Why use CDH3u1 when u2 is the latest?
2. You do not need 3 ZKs on a single machine. One is sufficient - you've only ended up complicating the setup there :)
3. The prop "hbase.master" probably does not apply anymore. IIRC, we source up the address via default interface instead, via DNS utils.
4. Use the Sun JDK/JRE, not OpenJDK/JRE.
5. Do a 'netstat -an | grep 60010' to figure out which interface it is listening on. It should be listening on all interfaces by default, but if that's not the case then that's your problem when you try "localhost:60010". Try setting "hbase.master.info.bindAddress" to 'localhost' perhaps.
6. Disable IPv6, in either HBase/Hadoop, or in your Ubuntu itself. This could also be your issue.
7. Lastly, can you ensure you have HMaster and HRegionServer showing up properly on your jps after you try accessing it? Good to check if it didn't crash on you.
8. A message like "2011-11-09 12:10:35,611 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 60010" in the HMaster logs is reassuring to see; it tells you whether the HttpServer has started yet or not.
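For item 5, a sketch of pulling the bind address out of a netstat line. The sample line here is illustrative, not real output from the box in question:

```shell
# On a live system: netstat -an | grep 60010
# Here we parse a hard-coded sample line to extract the local address
# the web UI is bound to (field 4 of netstat's output).
line="tcp        0      0 127.0.0.1:60010         0.0.0.0:*               LISTEN"
addr=$(echo "$line" | awk '{print $4}')
echo "bound to: $addr"    # prints: bound to: 127.0.0.1:60010
```

A 127.0.0.1 bind like this one would explain a UI that answers on localhost but not on the machine's external hostname; 0.0.0.0:60010 would mean all interfaces.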

On 05-Dec-2011, at 12:38 PM, Rahul Mehta wrote:
