Posted to dev@zookeeper.apache.org by Chris Miles <ch...@chrismiles.org> on 2018/03/03 14:34:24 UTC

Fwd: Connection Factories for Curator / Zookeeper / HTTP Tunneling

Firstly, I apologise for the cross-post, but I think this is a question
which may need to be seen by both users and devs who understand the
underlying code.

I need to deploy ZooKeeper to a firewall-restricted Cloud Foundry cloud,
where the only communication that can happen between nodes is over HTTP,
so I am looking at ways of getting ZooKeeper communicating through HTTP
tunnelling.

As far as I can determine, ZooKeeper only allows the configuring of the
main client connection via server and client connection factories, but
not the connectivity on ports 2888 and 3888, which I think (correct me
if I'm wrong) are node-to-node quorum communication on the first and
leader election on the second?
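
For context, these are the two extra ports in the standard zoo.cfg quorum
entries, i.e. server.N=host:quorumPort:electionPort (the hostnames here are
just placeholders):

    server.1=zk1.example.com:2888:3888
    server.2=zk2.example.com:2888:3888
    server.3=zk3.example.com:2888:3888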

Does ZooKeeper's connection handling give me any ability to intercept and
wrap the connections used on these remaining ports (e.g. with a Netty HTTP
tunnel)?

I am willing to contribute to the source to add this functionality if
required, as this is currently our only way of getting ZooKeeper onto our
cloud.

thanks

Chris




Re: Connection Factories for Curator / Zookeeper / HTTP Tunneling

Posted by Abraham Fine <af...@apache.org>.
Hi Chris-

Wouldn't it also be possible to make use of an external tunneling tool without modifying ZooKeeper at all?
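
To make the idea concrete, here is a minimal sketch of what such a tool
does: a relay process on each node that ZooKeeper talks to over localhost,
and which carries the bytes on to the real peer. Hostnames and ports below
are placeholders, and a real deployment would replace the raw TCP hop with
an HTTP-capable tunnel (this is just an illustration, not a recommendation
of a specific tool):

    import java.io.*;
    import java.net.*;

    public class TcpRelay {
        public static void main(String[] args) throws IOException {
            int listenPort = 2889;                 // local port ZooKeeper is pointed at
            String remoteHost = "zk2.example.com"; // placeholder: the real peer
            int remotePort = 2888;                 // placeholder: its quorum port

            try (ServerSocket server = new ServerSocket(listenPort)) {
                while (true) {
                    Socket in = server.accept();
                    Socket out = new Socket(remoteHost, remotePort);
                    pump(in, out); // local -> remote
                    pump(out, in); // remote -> local
                }
            }
        }

        // Copy bytes one way between the two sockets on a daemon thread.
        private static void pump(Socket from, Socket to) {
            Thread t = new Thread(() -> {
                try (InputStream src = from.getInputStream();
                     OutputStream dst = to.getOutputStream()) {
                    src.transferTo(dst);
                } catch (IOException ignored) {
                    // sketch only: a real relay would close both sockets here
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

The advantage of the external approach is that each ZooKeeper server just
sees an ordinary TCP endpoint on localhost, so nothing in ZooKeeper itself
needs to change.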

Abe



Re: Connection Factories for Curator / Zookeeper / HTTP Tunneling

Posted by Chris Miles <ch...@chrismiles.org>.
Thanks Mark. 

I've only had a glance at the code around the server connection factory, and the fact that there is a Netty one there seems like a good sign, as there are some generic Netty HTTP tunnel examples out there.
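
For example, the kind of thing I've seen (just a sketch, using the
HttpProxyHandler that ships in Netty's netty-handler-proxy module, which
does HTTP CONNECT tunnelling for arbitrary TCP traffic; hostnames and ports
are placeholders, and I haven't tried wiring this into ZooKeeper's
factories yet):

    import java.net.InetSocketAddress;
    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.handler.proxy.HttpProxyHandler;

    public class HttpTunnelSketch {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup group = new NioEventLoopGroup();
            try {
                Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Issues an HTTP CONNECT to the proxy before any
                            // other handler sees the channel; everything after
                            // this flows through the established tunnel.
                            ch.pipeline().addFirst(new HttpProxyHandler(
                                new InetSocketAddress("proxy.example.com", 8080)));
                            // ...the real protocol handlers would be added here.
                        }
                    });
                // The connect() target is the real peer; the proxy handler
                // reroutes the physical connection via the proxy.
                ChannelFuture f = b.connect("zk2.example.com", 2888).sync();
                f.channel().closeFuture().sync();
            } finally {
                group.shutdownGracefully();
            }
        }
    }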

If there's anything you can suggest as a starter for ten, I'd appreciate it.

Thanks

Chris 

Sent from my iPhone


Re: Connection Factories for Curator / Zookeeper / HTTP Tunneling

Posted by Mark Fenes <mf...@cloudera.com>.
Hi Chris,

Yes, ports 2888 and 3888 are the default ports for quorum communication and
leader election.
By default, ZK uses NIOServerCnxnFactory, unless the
zookeeper.serverCnxnFactory system property is set to a different
connection factory (e.g. Netty).
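
For example, it is switched with a system property on the server JVM (e.g.
via SERVER_JVMFLAGS; this assumes a ZK version that ships the Netty
factory):

    -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory

Note that this factory only covers the client port, not the quorum or
leader election ports you are asking about.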

So, you would like to configure and run the ZooKeeper server instances so
that the quorum communication and leader election would also take place on
the HTTP port via tunnelling?
Let me check; I need to do further research to answer this question.

And yes, should ZK not have this functionality, we would be very thankful
for your willingness to contribute to the source code.

Regards,
Mark

