Posted to mapreduce-user@hadoop.apache.org by Manu S <ma...@gmail.com> on 2012/07/03 08:46:03 UTC

Re: HBase is able to connect to ZooKeeper but the connection closes immediately

Hi All,

This issue has been solved by setting the HBase configuration directly in the
MapReduce code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTablePool;

// Build the HBase configuration in the MapReduce code itself, instead of
// relying on hbase-site.xml being found on the task classpath.
Configuration conf = HBaseConfiguration.create();
conf.clear();
conf.set("hbase.zookeeper.quorum", "<namenode hostname/IP>");
conf.set("hbase.zookeeper.property.clientPort", "<client port>");
conf.set("hbase.master", "<namenode hostname/IP>:60000");

HTablePool htablepool = null;
try {
    htablepool = new HTablePool(conf, 1);
    System.out.println("Hbase Host Name:::::::"
            + conf.get("hbase.zookeeper.quorum") + ":"
            + conf.get("hbase.zookeeper.property.clientPort"));
} catch (Exception e) {
    e.printStackTrace();
}
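For context, the same conf object can also be handed to the job submission itself.
The sketch below is not from the original mail; the driver, reducer, and table
names are placeholders. It uses org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil,
which wires a job's output to an HBase table from exactly this kind of Configuration:

// Hypothetical job setup reusing the conf built above (sketch only).
Job job = new Job(conf, "write-to-hbase");      // org.apache.hadoop.mapreduce.Job
job.setJarByClass(MyDriver.class);              // placeholder driver class
TableMapReduceUtil.initTableReducerJob(
        "mytable",                              // placeholder output table
        MyTableReducer.class,                   // placeholder TableReducer
        job);
job.waitForCompletion(true);

Because the quorum and client port are set on conf in code, the job no longer
depends on hbase-site.xml being present on the task classpath.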

Thanks,
Manu S



>> We have installed ZooKeeper on our master node, which also runs the HBase Master,
>> NameNode & JobTracker.
>> The 4 slave nodes run the HBase RegionServer, DataNode & TaskTracker.
>>
>> ZooKeeper can be installed alongside the HBase Master, right?
>>
>> I did all the configuration changes in zoo.cfg & hbase-site.xml and copied
>> the hadoop jar & commons-configuration jar to the HBase/lib directory, but
>> I am still getting the same error:
>>
>> Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException:
>> HBase is able to connect to ZooKeeper but the connection closes immediately.
>> This could be a sign that the server has too many connections (30 is the default).
>> Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can.
>> See HTable's javadoc for more information.
>>
>>
>> Are there any specific code modifications needed for writing output into HDFS
>> using MapReduce, or is this a configuration problem? It works fine in
>> pseudo-distributed mode.
>>
>> Appreciate your help on the same.
>>
>> Thanks,
>> Manu S
>>
>> On Thu, Jun 7, 2012 at 10:57 PM, shashwat shriparv <
>> dwivedishashwat@gmail.com> wrote:
>>
>>> If you have a separate ZooKeeper running, there is no need to specify it in
>>> the HBase settings...
>>>
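What that usually means in practice, sketched under the assumption of an externally
managed ZooKeeper ensemble (file locations vary by install), is telling HBase not to
start or stop its own ZooKeeper in conf/hbase-env.sh:

# conf/hbase-env.sh -- assumes ZooKeeper is run and managed outside HBase
export HBASE_MANAGES_ZK=false

Clients still need hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort
(in hbase-site.xml or set in code, as in the fix at the top of this thread) to
locate the external ensemble.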
>>> On Thu, Jun 7, 2012 at 8:12 PM, Manu S <ma...@gmail.com> wrote:
>>>
>>> > Hi Tariq,
>>> >
>>> > Version: HBase-0.90.4
>>> > I downloaded commons-configuration-1.6.jar and put it inside HBASE_HOME/lib
>>> > & HADOOP_HOME/lib (in pseudo-distributed mode) and tested.
>>> >
>>> > hbase(main):002:0> status
>>> > 12/06/07 19:59:59 FATAL zookeeper.ZKConfig: The server in zoo.cfg cannot be
>>> > set to localhost in a fully-distributed setup because it won't be
>>> > reachable. See "Getting Started" for more information.
>>> >
>>> > ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able
>>> > to connect to ZooKeeper but the connection closes immediately. This could
>>> > be a sign that the server has too many connections (30 is the default).
>>> > Consider inspecting your ZK server logs for that error and then make sure
>>> > you are reusing HBaseConfiguration as often as you can. See HTable's
>>> > javadoc for more information.
>>> >
>>> > Is this commons-configuration version compatible with HBase-0.90.4?
>>> >
>>> >
>>> >
>>> > @Shashwat:
>>> > Thank you!!
>>> > Some parameters are missing in my HBase configuration; I will add these
>>> > and test. The ZooKeeper parameters are already in zoo.cfg. Is that enough,
>>> > or do I need to add them again in the HBase configuration?
>>> >
>>> > Thanks,
>>> > Manu S
>>> >
>>> >
>>> > On Thu, Jun 7, 2012 at 7:11 PM, shashwat shriparv <
>>> > dwivedishashwat@gmail.com
>>> > > wrote:
>>> >
>>> > > Try these settings and check what you have and what you don't have in
>>> > > your configuration:
>>> > >
>>> > >
>>> > > <configuration>
>>> > > <property>
>>> > > <name>hbase.rootdir</name>
>>> > > <value>hdfs://{your machine name} or {localhost}:9000/hbase</value>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.master</name>
>>> > > <value>{your machine name} or {localhost}:60000</value>
>>> > > <description>The host and port that the HBase master runs at.</description>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.regionserver.port</name>
>>> > > <value>60020</value>
>>> > > <description>The port the HBase RegionServer listens on.</description>
>>> > > </property>
>>> > > <!--<property>
>>> > > <name>hbase.master.port</name>
>>> > > <value>60000</value>
>>> > > <description>The port that the HBase master runs at.</description>
>>> > > </property>-->
>>> > > <property>
>>> > > <name>hbase.cluster.distributed</name>
>>> > > <value>true</value>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.tmp.dir</name>
>>> > > <value>/home/shashwat/Hadoop/hbase-0.90.4/temp</value>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.zookeeper.quorum</name>
>>> > > <value>{your machine name} or {localhost}</value>
>>> > > </property>
>>> > > <property>
>>> > > <name>dfs.replication</name>
>>> > > <value>1</value>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.zookeeper.property.clientPort</name>
>>> > > <value>2181</value>
>>> > > <description>Property from ZooKeeper's config zoo.cfg.
>>> > > The port at which the clients will connect.
>>> > > </description>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.zookeeper.property.dataDir</name>
>>> > > <value>/home/shashwat/zookeeper</value>
>>> > > <description>Property from ZooKeeper's config zoo.cfg.
>>> > > The directory where the snapshot is stored.
>>> > > </description>
>>> > > </property>
>>> > > <property>
>>> > > <name>zookeeper.session.timeout</name>
>>> > > <value>18000000</value>
>>> > > <description>Session timeout.</description>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.client.scanner.caching</name>
>>> > > <value>5000</value>
>>> > > </property>
>>> > > <property>
>>> > > <name>hbase.regionserver.lease.period</name>
>>> > > <value>2400000</value>
>>> > > </property>
>>> > > </configuration>
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > Thanx and regards
>>> > >
>>> > > ∞
>>> > > Shashwat Shriparv
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > On Thu, Jun 7, 2012 at 3:16 PM, Mohammad Tariq <do...@gmail.com>
>>> > wrote:
>>> > >
>>> > > > Which distribution are you using? Actually, this is not possible; it
>>> > > > must be there. Download it and put it there.
>>> > > >
>>> > > > Regards,
>>> > > >     Mohammad Tariq
>>> > > >
>>> > > >
>>> > > > On Thu, Jun 7, 2012 at 2:41 PM, Manu S <ma...@gmail.com>
>>> wrote:
>>> > > > > Hi Tariq,
>>> > > > >
>>> > > > > Thank you!!
>>> > > > > I already changed the maxClientCnxns to 1000.
>>> > > > > Also we have set CLASSPATH to include all the Hadoop, HBase &
>>> > > > > ZooKeeper paths. I think copying the Hadoop .jar files to the HBase
>>> > > > > lib folder has the same effect as setting CLASSPATH with all the
>>> > > > > folders.
>>> > > > > There is no commons-configuration-*.jar inside the hadoop/lib folder.
>>> > > > >
>>> > > > > Any other options?
>>> > > > >
>>> > > > > Thanks,
>>> > > > > Manu S
>>> > > > >
>>> > > > > On Thu, Jun 7, 2012 at 1:31 PM, Mohammad Tariq <
>>> dontariq@gmail.com>
>>> > > > wrote:
>>> > > > >
>>> > > > >> Actually ZooKeeper servers have an active connection limit, which by
>>> > > > >> default is 30. You can increase this limit by setting the
>>> > > > >> maxClientCnxns property accordingly in your ZooKeeper config file,
>>> > > > >> zoo.cfg. For example: maxClientCnxns=100. But before that, copy the
>>> > > > >> hadoop-core-*.jar present inside the hadoop folder to the hbase/lib
>>> > > > >> folder. Also copy commons-configuration-1.6.jar from the hadoop/lib
>>> > > > >> folder to the hbase/lib folder, check it once, and see if it works
>>> > > > >> for you.
>>> > > > >>
>>> > > > >> Regards,
>>> > > > >>     Mohammad Tariq
>>> > > > >>
>>> > > > >>
>>> > > > >> On Thu, Jun 7, 2012 at 1:13 PM, Manu S <ma...@gmail.com>
>>> wrote:
>>> > > > >> > Hi All,
>>> > > > >> >
>>> > > > >> > Thank you for your reply.
>>> > > > >> >
>>> > > > >> > I tried all these options but still I am facing this issue.
>>> > > > >> >
>>> > > > >> > @Mayank: I tried the same, but still getting error.
>>> > > > >> > export HADOOP_CLASSPATH="/usr/lib/hadoop/:/usr/lib/hadoop/lib/:/usr/lib/hadoop/conf/"
>>> > > > >> > export HBASE_CLASSPATH="/usr/lib/hbase/:/usr/lib/hbase/lib/:/usr/lib/hbase/conf/:/usr/lib/zookeeper/:/usr/lib/zookeeper/conf/:/usr/lib/zookeeper/lib/"
>>> > > > >> > export CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CLASSPATH}"
>>> > > > >> >
>>> > > > >> > @Marcos & Tariq:
>>> > > > >> > We are using Hbase version 0.90.4
>>> > > > >> > The job creates only a single HBaseConfiguration object.
>>> > > > >> >
>>> > > > >> > @Kevin:
>>> > > > >> > No luck, same error
>>> > > > >> >
>>> > > > >> >
>>> > > > >> > Thanks,
>>> > > > >> > Manu S
>>> > > > >> >
>>> > > > >> > On Thu, Jun 7, 2012 at 3:50 AM, Mayank Bansal <
>>> mayank@apache.org>
>>> > > > wrote:
>>> > > > >> >
>>> > > > >> >>
>>> > > > >> >>> The ZooKeeper conf is not on the classpath for the MapReduce job.
>>> > > > >> >>> Add the conf file to the classpath for the job.
>>> > > > >> >>>
>>> > > > >> >>> Thanks,
>>> > > > >> >>> Mayank
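A hedged sketch of that suggestion, reusing the /usr/lib conf paths quoted earlier
in this thread (the jar and driver names are placeholders, not from the original
mail): put the HBase and ZooKeeper conf directories on HADOOP_CLASSPATH on the
submitting node before launching the job:

# Sketch only: make hbase-site.xml and zoo.cfg visible when the job is submitted.
export HADOOP_CLASSPATH="/usr/lib/hbase/conf/:/usr/lib/zookeeper/conf/:${HADOOP_CLASSPATH}"
hadoop jar my-hbase-job.jar com.example.MyJobDriver

This only affects the client side; the fix Manu eventually posted (setting the
quorum directly on the Configuration in code) avoids relying on the task classpath
at all.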
>>> > > > >> >>>
>>> > > > >> >>>
>>> > > > >> >>> On Wed, Jun 6, 2012 at 7:25 AM, Manu S <manupkd87@gmail.com
>>> >
>>> > > wrote:
>>> > > > >> >>>
>>> > > > >> >>>> Hi All,
>>> > > > >> >>>>
>>> > > > >> >>>> We are running a MapReduce job in a fully distributed cluster.
>>> > > > >> >>>> The output of the job is written to HBase.
>>> > > > >> >>>>
>>> > > > >> >>>> While running this job we are getting an error:
>>> > > > >> >>>>
>>> > > > >> >>>> Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException:
>>> > > > >> >>>> HBase is able to connect to ZooKeeper but the connection closes
>>> > > > >> >>>> immediately. This could be a sign that the server has too many
>>> > > > >> >>>> connections (30 is the default). Consider inspecting your ZK
>>> > > > >> >>>> server logs for that error and then make sure you are reusing
>>> > > > >> >>>> HBaseConfiguration as often as you can. See HTable's javadoc for
>>> > > > >> >>>> more information.
>>> > > > >> >>>>     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>>> > > > >> >>>>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>>> > > > >> >>>>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>>> > > > >> >>>>     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>>> > > > >> >>>>     at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>>> > > > >> >>>>     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
>>> > > > >> >>>>     at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:36)
>>> > > > >> >>>>
>>> > > > >> >>>>
>>> > > > >> >>>> I had gone through some threads related to this issue and
>>> > > > >> >>>> modified zoo.cfg accordingly. These configurations are the same
>>> > > > >> >>>> on all the nodes.
>>> > > > >> >>>> Please find the configuration of HBase & ZooKeeper below:
>>> > > > >> >>>>
>>> > > > >> >>>> Hbase-site.xml:
>>> > > > >> >>>>
>>> > > > >> >>>> <configuration>
>>> > > > >> >>>>
>>> > > > >> >>>> <property>
>>> > > > >> >>>> <name>hbase.cluster.distributed</name>
>>> > > > >> >>>> <value>true</value>
>>> > > > >> >>>> </property>
>>> > > > >> >>>>
>>> > > > >> >>>> <property>
>>> > > > >> >>>> <name>hbase.rootdir</name>
>>> > > > >> >>>> <value>hdfs://namenode/hbase</value>
>>> > > > >> >>>> </property>
>>> > > > >> >>>>
>>> > > > >> >>>> <property>
>>> > > > >> >>>> <name>hbase.zookeeper.quorum</name>
>>> > > > >> >>>> <value>namenode</value>
>>> > > > >> >>>> </property>
>>> > > > >> >>>>
>>> > > > >> >>>> </configuration>
>>> > > > >> >>>>
>>> > > > >> >>>>
>>> > > > >> >>>> Zoo.cfg:
>>> > > > >> >>>>
>>> > > > >> >>>> # The number of milliseconds of each tick
>>> > > > >> >>>> tickTime=2000
>>> > > > >> >>>> # The number of ticks that the initial
>>> > > > >> >>>> # synchronization phase can take
>>> > > > >> >>>> initLimit=10
>>> > > > >> >>>> # The number of ticks that can pass between
>>> > > > >> >>>> # sending a request and getting an acknowledgement
>>> > > > >> >>>> syncLimit=5
>>> > > > >> >>>> # the directory where the snapshot is stored.
>>> > > > >> >>>> dataDir=/var/zookeeper
>>> > > > >> >>>> # the port at which the clients will connect
>>> > > > >> >>>> clientPort=2181
>>> > > > >> >>>> #server.0=localhost:2888:3888
>>> > > > >> >>>> server.0=namenode:2888:3888
>>> > > > >> >>>>
>>> > > > >> >>>> ################# Max Client connections ###################
>>> > > > >> >>>> maxClientCnxns=1000
>>> > > > >> >>>> minSessionTimeout=4000
>>> > > > >> >>>> maxSessionTimeout=40000
>>> > > > >> >>>>
>>> > > > >> >>>>
>>> > > > >> >>>> It would be really great if anyone could help me resolve this
>>> > > > >> >>>> issue by giving your thoughts/suggestions.
>>> > > > >> >>>>
>>> > > > >> >>>> Thanks,
>>> > > > >> >>>> Manu S
>>> > > > >> >>>>
>>> > > > >> >>>
>>> > > > >> >>>
>>> > > > >> >>
>>> > > > >>
>>> > > >
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > >
>>> > >
>>> > > ∞
>>> > > Shashwat Shriparv
>>> > >
>>> >
>>>
>>>
>>>
>>> --
>>>
>>>
>>> ∞
>>> Shashwat Shriparv
>>>
>>
>>