Posted to dev@hbase.apache.org by Ted Yu <yu...@gmail.com> on 2011/01/04 20:09:36 UTC

consistent KeeperException$ConnectionLossException

Hi,
I am using HBase 0.90 and our job fails consistently with the following
exception:

Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException:
org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:147)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1035)
	... 19 more
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
	at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:608)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:902)
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:133)
	... 20 more

The ZooKeeper quorum runs on the same node as the NameNode; the HMaster is on
another node. Hadoop is cdh3b2.

In the ZooKeeper log, I see the following (10.202.50.79 is the node where the
exception above happened):

2011-01-04 18:47:40,633 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 30
2011-01-04 18:47:41,187 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 30
2011-01-04 18:47:42,375 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 30
2011-01-04 18:47:42,447 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 30
2011-01-04 18:47:43,113 WARN org.apache.zookeeper.server.NIOServerCnxn:
EndOfStreamException: Unable to read additional data from client sessionid
0x12d5220eb970025, likely client has closed socket
2011-01-04 18:47:43,113 INFO org.apache.zookeeper.server.NIOServerCnxn:
Closed socket connection for client /10.202.50.79:37845 which had sessionid
0x12d5220eb970025
2011-01-04 18:47:43,113 WARN org.apache.zookeeper.server.NIOServerCnxn:
EndOfStreamException: Unable to read additional data from client sessionid
0x12d5220eb970087, likely client has closed socket

Please advise what parameter I should tune.

Thanks

Re: consistent KeeperException$ConnectionLossException

Posted by Ted Yu <yu...@gmail.com>.
Our code has been using new HTable(config, tableName), and this issue started
occurring when dealing with a relatively large data set.

On Tue, Jan 4, 2011 at 7:23 PM, Stack <st...@duboce.net> wrote:

> When you make a new HTable, do you do new HTable(tableName) or new
> HTable(config, tableName)?  If you are using the latter, you still run
> into the below?
>
> St.Ack

Re: consistent KeeperException$ConnectionLossException

Posted by Stack <st...@duboce.net>.
When you make a new HTable, do you do new HTable(tableName) or new
HTable(config, tableName)?  If you are using the latter, you still run
into the below?

St.Ack

On Tue, Jan 4, 2011 at 6:47 PM, Ted Yu <yu...@gmail.com> wrote:
> I increased max connections to 40.
> I still got:
>
> 2011-01-04 21:30:05,701 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
> many connections from /10.202.50.79 - max is 40
> 2011-01-04 21:30:06,072 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
> many connections from /10.202.50.80 - max is 40
> 2011-01-04 21:30:06,458 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
> many connections from /10.202.50.80 - max is 40
> 2011-01-04 21:30:06,944 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
> many connections from /10.202.50.79 - max is 40
> 2011-01-04 21:30:07,273 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
> many connections from /10.202.50.80 - max is 40
> 2011-01-04 21:30:07,665 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
> many connections from /10.202.50.79 - max is 40
> 2011-01-04 21:30:07,876 WARN org.apache.zookeeper.server.NIOServerCnxn:
> EndOfStreamException: Unable to read additional data from client sessionid
> 0x12d52be9c2b001b, likely client has closed socket
> 2011-01-04 21:30:07,876 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.202.50.79:43150 which had sessionid
> 0x12d52be9c2b001b
> 2011-01-04 21:30:07,876 WARN org.apache.zookeeper.server.NIOServerCnxn:
> EndOfStreamException: Unable to read additional data from client sessionid
> 0x12d52be9c2b008b, likely client has closed socket
> 2011-01-04 21:30:07,876 INFO org.apache.zookeeper.server.NIOServerCnxn:
> Closed socket connection for client /10.202.50.79:26104 which had sessionid
> 0x12d52be9c2b008b
> 2011-01-04 21:30:07,876 WARN org.apache.zookeeper.server.NIOServerCnxn:
> EndOfStreamException: Unable to read additional data from client sessionid
> 0x12d52be9c2b010c, likely client has closed socket
>
> I verified maxClientCnxns of 30 in 0.20.6 where we didn't experience this
> problem.
>
> More comment is welcome.

Re: consistent KeeperException$ConnectionLossException

Posted by Ted Yu <yu...@gmail.com>.
I increased max connections to 40.
I still got:

2011-01-04 21:30:05,701 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 40
2011-01-04 21:30:06,072 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.80 - max is 40
2011-01-04 21:30:06,458 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.80 - max is 40
2011-01-04 21:30:06,944 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 40
2011-01-04 21:30:07,273 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.80 - max is 40
2011-01-04 21:30:07,665 WARN org.apache.zookeeper.server.NIOServerCnxn: Too
many connections from /10.202.50.79 - max is 40
2011-01-04 21:30:07,876 WARN org.apache.zookeeper.server.NIOServerCnxn:
EndOfStreamException: Unable to read additional data from client sessionid
0x12d52be9c2b001b, likely client has closed socket
2011-01-04 21:30:07,876 INFO org.apache.zookeeper.server.NIOServerCnxn:
Closed socket connection for client /10.202.50.79:43150 which had sessionid
0x12d52be9c2b001b
2011-01-04 21:30:07,876 WARN org.apache.zookeeper.server.NIOServerCnxn:
EndOfStreamException: Unable to read additional data from client sessionid
0x12d52be9c2b008b, likely client has closed socket
2011-01-04 21:30:07,876 INFO org.apache.zookeeper.server.NIOServerCnxn:
Closed socket connection for client /10.202.50.79:26104 which had sessionid
0x12d52be9c2b008b
2011-01-04 21:30:07,876 WARN org.apache.zookeeper.server.NIOServerCnxn:
EndOfStreamException: Unable to read additional data from client sessionid
0x12d52be9c2b010c, likely client has closed socket

I verified that maxClientCnxns was 30 in 0.20.6, where we didn't experience this
problem.

More comments are welcome.

On Tue, Jan 4, 2011 at 11:47 AM, Ted Yu <yu...@gmail.com> wrote:

> So I should be using HTablePool.
> For 0.20.6, I didn't see ConnectionLossException this often.
>
> I wonder if something changed from 0.20.6 to 0.90
>

Re: consistent KeeperException$ConnectionLossException

Posted by Ted Yu <yu...@gmail.com>.
So I should be using HTablePool.
For 0.20.6, I didn't see ConnectionLossException this often.

I wonder if something changed from 0.20.6 to 0.90.

On Tue, Jan 4, 2011 at 11:29 AM, Stack <st...@duboce.net> wrote:

> Are you passing the same Configuration instance when creating your
> HTables?   See
> http://people.apache.org/~stack/hbase-0.90.0-candidate-2/docs/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html
> if not.  It explains how we figure whether zk client, rpc connections,
> etc. are shared or not.
>
> St.Ack

Re: consistent KeeperException$ConnectionLossException

Posted by Stack <st...@duboce.net>.
Are you passing the same Configuration instance when creating your
HTables? If not, see
http://people.apache.org/~stack/hbase-0.90.0-candidate-2/docs/apidocs/org/apache/hadoop/hbase/client/HConnectionManager.html
It explains how we determine whether the ZK client, RPC connections,
etc. are shared or not.
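The sharing behavior above can be sketched with a small self-contained analogue: in 0.90 the client caches connections keyed on the Configuration instance, so reusing one Configuration shares a single ZooKeeper connection, while a fresh Configuration per HTable opens a new one. The Configuration and Connection classes below are simplified stand-ins, not the real HBase API:

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Simplified analogue of HConnectionManager's connection cache.
// Configuration and Connection are stand-ins, not HBase classes.
public class ConnectionCacheDemo {
    static class Configuration {}
    static class Connection {}

    // One cached connection per distinct Configuration instance.
    static final Map<Configuration, Connection> CACHE = new IdentityHashMap<>();

    static synchronized Connection getConnection(Configuration conf) {
        return CACHE.computeIfAbsent(conf, c -> new Connection());
    }

    public static void main(String[] args) {
        Configuration shared = new Configuration();
        Connection a = getConnection(shared);
        Connection b = getConnection(shared);              // same connection reused
        Connection c = getConnection(new Configuration()); // separate connection
        System.out.println("shared reused: " + (a == b));             // true
        System.out.println("distinct configs share: " + (a == c));    // false
    }
}
```

Under this model, constructing every HTable with one shared Configuration keeps the client at a single ZK connection, while creating a new Configuration per HTable during a large job multiplies connections until the quorum's per-IP cap is hit.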

St.Ack

On Tue, Jan 4, 2011 at 11:12 AM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> It's a zookeeper setting, you cannot have by default more than 30
> connections from the same IP per ZK peer.
>
> If HBase is starting ZK for you, do change
> hbase.zookeeper.property.maxClientCnxns
>
> J-D

Re: consistent KeeperException$ConnectionLossException

Posted by Jean-Daniel Cryans <jd...@apache.org>.
It's a ZooKeeper setting: by default, you cannot have more than 30
connections from the same IP per ZK peer.

If HBase is starting ZK for you, do change
hbase.zookeeper.property.maxClientCnxns
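For reference, a sketch of how that property might look in hbase-site.xml; the value 300 here is only an illustrative example, not a recommendation, and it takes effect only when HBase manages ZooKeeper for you:

```xml
<!-- hbase-site.xml: raise the per-IP ZooKeeper connection cap.
     hbase.zookeeper.property.* settings are passed through to the
     ZK server that HBase starts (here: maxClientCnxns). -->
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>300</value>
</property>
```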

J-D

On Tue, Jan 4, 2011 at 11:09 AM, Ted Yu <yu...@gmail.com> wrote: