Posted to user@zookeeper.apache.org by Bob Sheehan <bs...@vmware.com> on 2015/09/24 21:32:22 UTC
Tracking down possible network partition
We have similar issue:
3-node ZK cluster in DC1 (e.g. Las Vegas), quorum of 2. Each node on a VMware ESXi host in the same rack.
2 observer ZK nodes in DC2 (e.g. Germany). Each node on a VMware ESXi host in the same rack.
CentOS 6
ZK version: 3.4.5 (Cloudera CDH).
* Leader election in DC1 looks like it is taking a while, ~15 minutes. At some point the TCP connection to one of the three nodes is lost. It eventually repairs.
* Apparently during leader election the connection to the observers is lost for ~15 minutes... then the connection is repaired. But we have a 15-minute window where both observers (DC2) cannot communicate with the ZK cluster (DC1). Our DC2 clients are communicating with the observers using the Apache Curator library. This causes our API to fail as it needs ZK data.
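Worth noting for this topology: observers do not vote in leader election, so the DC2 nodes cannot help DC1 reach quorum; with 3 voting members the ensemble needs a majority of 2. A quick sketch of the majority arithmetic (plain Python, just for illustration):

```python
def quorum_size(voting_members: int) -> int:
    """Strict majority of the voting ensemble (observers are excluded)."""
    return voting_members // 2 + 1

# 3 voters in DC1: quorum is 2, so one voter can fail and the cluster survives.
print(quorum_size(3))  # 2
# The 2 observers in DC2 never count toward this; adding them changes nothing.
```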
We used netstat on the TCP ports and are seeing non-zero Send-Q sizes.
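A non-zero Send-Q means the local TCP stack has queued bytes the peer has not acknowledged, which is consistent with a partition. A small sketch of filtering `netstat -tan` style output for stuck connections (the sample output below is made up for illustration):

```python
def stuck_connections(netstat_output: str):
    """Return (local, remote, sendq) for rows whose Send-Q column is non-zero."""
    stuck = []
    for line in netstat_output.splitlines():
        fields = line.split()
        # netstat -tan rows: Proto Recv-Q Send-Q Local-Address Foreign-Address State
        if len(fields) >= 5 and fields[0].startswith("tcp") and fields[2].isdigit():
            sendq = int(fields[2])
            if sendq > 0:
                stuck.append((fields[3], fields[4], sendq))
    return stuck

sample = """\
Proto Recv-Q Send-Q Local Address      Foreign Address    State
tcp        0      0 10.0.0.1:2181      10.0.0.2:41234     ESTABLISHED
tcp        0  14600 10.0.0.1:2888      192.168.5.7:38822  ESTABLISHED
"""
print(stuck_connections(sample))  # [('10.0.0.1:2888', '192.168.5.7:38822', 14600)]
```

If Send-Q stays non-zero for minutes on the leader-election or peer ports, the peer is not draining the socket, which points at the network rather than at ZooKeeper itself.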
Is there any known fix/patch for this? Suggestions welcome.
Thanks,
Bob
Re: Tracking down possible network partition
Posted by Isabel Muñoz Fernández <im...@etsisi.upm.es>.
Sergio:
Take a look at this Zookeeper forum thread about how they handle ZK nodes on different continents and the resulting partition problems. There seems to be a feature involved (Curator observers) that we will need to study.
> On 24 Sep 2015, at 21:32, Bob Sheehan <bs...@vmware.com> wrote:
>
> We have similar issue:
>
> 3-node ZK cluster in DC1 (e.g. Las Vegas), quorum of 2. Each node on a VMware ESXi host in the same rack.
>
> 2 observer ZK nodes in DC2 (e.g. Germany). Each node on a VMware ESXi host in the same rack.
>
> CentOS 6
> ZK version: 3.4.5 (Cloudera CDH).
>
>
> * Leader election in DC1 looks like it is taking a while, ~15 minutes. At some point the TCP connection to one of the three nodes is lost. It eventually repairs.
>
>
> * Apparently during leader election the connection to the observers is lost for ~15 minutes... then the connection is repaired. But we have a 15-minute window where both observers (DC2) cannot communicate with the ZK cluster (DC1). Our DC2 clients are communicating with the observers using the Apache Curator library. This causes our API to fail as it needs ZK data.
>
> We used netstat on the TCP ports and are seeing non-zero Send-Q sizes.
>
>
> Is there any known fix/patch for this? Suggestions welcome.
>
> Thanks,
>
> Bob
Re: ZOOKEEPER-1998
Posted by Steven Schlansker <ss...@opentable.com>.
On Oct 2, 2015, at 11:25 AM, Pramod Srinivasan <pr...@juniper.net> wrote:
>
> Are there any plans to fix this bug?
>
> https://issues.apache.org/jira/browse/ZOOKEEPER-1998
>
> We are hitting an unusual problem with applications getting stuck in the Linux kernel when they call getaddrinfo.
>
> This is due to a bug in the Linux kernel:
>
> http://www.spinics.net/lists/netdev/msg328772.html
>
> We hit this problem frequently because a number of our applications use the ZooKeeper client library, and due to ZOOKEEPER-1998 our likelihood of hitting it increases. We have had mixed luck getting a fix for the Linux kernel bug, so it would be good to at least reduce the probability.
The linked improvement would certainly be welcome.
We ran into the same problem in a different context, and found
that upgrading from kernel 4.0.4 to 4.0.9 dramatically
improved our situation.
Maybe a similar upgrade helps you out too.
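Until a kernel or client fix lands, one workaround is to cut down on how often getaddrinfo runs at all: resolve the ensemble hostnames once at startup and hand the client literal IPs in the connect string. A minimal sketch (the hostname below is hypothetical, and note this trades away DNS-based failover):

```python
import socket

def resolve_connect_string(hosts: list) -> str:
    """Resolve each host:port entry once and return an IP-based ZK connect string."""
    parts = []
    for hostport in hosts:
        host, _, port = hostport.rpartition(":")
        # One getaddrinfo call per host at startup, instead of one per reconnect.
        ip = socket.getaddrinfo(host, int(port), socket.AF_INET,
                                socket.SOCK_STREAM)[0][4][0]
        parts.append("%s:%s" % (ip, port))
    return ",".join(parts)

# Hypothetical ensemble; "localhost" resolves to 127.0.0.1 via AF_INET.
print(resolve_connect_string(["localhost:2181"]))
```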
RE: ZOOKEEPER-1998
Posted by Pramod Srinivasan <pr...@juniper.net>.
gentle nudge!
________________________________________
From: Pramod Srinivasan [pramod@juniper.net]
Sent: Thursday, September 24, 2015 6:56 PM
To: user@zookeeper.apache.org
Subject: ZOOKEEPER-1998
Hello Folks
Are there any plans to fix this bug?
https://issues.apache.org/jira/browse/ZOOKEEPER-1998
We are hitting an unusual problem with applications getting stuck in the Linux kernel when they call getaddrinfo.
This is due to a bug in the Linux kernel:
http://www.spinics.net/lists/netdev/msg328772.html
We hit this problem frequently because a number of our applications use the ZooKeeper client library, and due to ZOOKEEPER-1998 our likelihood of hitting it increases. We have had mixed luck getting a fix for the Linux kernel bug, so it would be good to at least reduce the probability.
Thanks,
Pramod
ZOOKEEPER-1998
Posted by Pramod Srinivasan <pr...@juniper.net>.
Hello Folks
Are there any plans to fix this bug?
https://issues.apache.org/jira/browse/ZOOKEEPER-1998
We are hitting an unusual problem with applications getting stuck in the Linux kernel when they call getaddrinfo.
This is due to a bug in the Linux kernel:
http://www.spinics.net/lists/netdev/msg328772.html
We hit this problem frequently because a number of our applications use the ZooKeeper client library, and due to ZOOKEEPER-1998 our likelihood of hitting it increases. We have had mixed luck getting a fix for the Linux kernel bug, so it would be good to at least reduce the probability.
Thanks,
Pramod
RE: Tracking down possible network partition
Posted by Akihiro Suda <su...@lab.ntt.co.jp>.
Hi,
This JIRA ticket seems related:
https://issues.apache.org/jira/browse/ZOOKEEPER-2246
The patch suggested in the ticket might be helpful for you.
Regards,
Akihiro Suda
-----Original Message-----
From: Bob Sheehan [mailto:bsheehan@vmware.com]
Sent: Friday, September 25, 2015 4:32 AM
To: user@zookeeper.apache.org
Subject: Tracking down possible network partition
We have similar issue:
3-node ZK cluster in DC1 (e.g. Las Vegas), quorum of 2. Each node on a
VMware ESXi host in the same rack.
2 observer ZK nodes in DC2 (e.g. Germany). Each node on a VMware ESXi host in
the same rack.
CentOS 6
ZK version: 3.4.5 (Cloudera CDH).
* Leader election in DC1 looks like it is taking a while, ~15 minutes. At
some point the TCP connection to one of the three nodes is lost. It eventually repairs.
* Apparently during leader election the connection to the observers is lost for ~15
minutes... then the connection is repaired. But we have a 15-minute window where both
observers (DC2) cannot communicate with the ZK cluster (DC1). Our DC2 clients
are communicating with the observers using the Apache Curator library. This causes our
API to fail as it needs ZK data.
We used netstat on the TCP ports and are seeing non-zero Send-Q sizes.
Is there any known fix/patch for this? Suggestions welcome.
Thanks,
Bob