Posted to user@ignite.apache.org by Rob Drawsky <rd...@advent.com> on 2020/12/21 02:16:58 UTC

Re: Kubernetes - Access Ignite Cluster Externally

Roman, 

I know this is old, but thanks for your summary. I was able to get your
method to work, but with an additional step in the config.

It seems that if localAddress is not set via config, Ignite will
enumerate all interfaces (of which there is usually more than one for
Docker containers in general) and report all of them after discovery, causing
problems with joining the cluster -- I believe because I didn't have
AddressResolver entries for all of those interface IPs.

To solve this, I set localAddress (for both the TcpDiscovery and
TcpCommunication SPIs) to the pod's address (available via env), and use
that address in the AddressResolver mapping to the external NodePorts.
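In case it helps anyone following along, here is a rough sketch of what that looks like in Spring XML. It assumes POD_IP is injected into the container via the Kubernetes Downward API, and external.example.com is a placeholder for whatever host the NodePort is reachable on:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">

  <!-- Map the pod's internal IP to the externally reachable address.
       BasicAddressResolver takes a map of internal -> external addresses. -->
  <property name="addressResolver">
    <bean class="org.apache.ignite.configuration.BasicAddressResolver">
      <constructor-arg>
        <map>
          <!-- POD_IP comes from the Downward API in the pod spec -->
          <entry key="#{systemEnvironment['POD_IP']}" value="external.example.com"/>
        </map>
      </constructor-arg>
    </bean>
  </property>

  <!-- Pin both SPIs to the pod's address so Ignite does not advertise
       every container interface after discovery. -->
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="localAddress" value="#{systemEnvironment['POD_IP']}"/>
    </bean>
  </property>
  <property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
      <property name="localAddress" value="#{systemEnvironment['POD_IP']}"/>
    </bean>
  </property>

</bean>
```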

After doing so, the remote 'thick' Ignite client is able to join the cluster
completely. However, I am only using a single Ignite server node so
far.

In the next week or so, I will try to get this to work with the
TcpDiscoveryKubernetesIpFinder, which we use to create multi-node server
clusters in Kubernetes. I am unsure whether using AddressResolver to do the
mapping will work in multi-node clusters without additional steps (in this
case the NodePorts would be load balanced for external access), and the
Kubernetes-internal Ignite pods may end up getting mapped to the
external ports as well. I am not sure Ignite will be happy with that. It
seems different contexts would be needed for AddressResolver: the
internal nodes should talk to each other with internal IPs, while the
external client should use external addresses... and that may be a problem.
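To make the multi-node concern concrete, the mapping would presumably need one entry per pod, each pointing at a distinct external port. A hypothetical sketch (all IPs and ports are placeholders; whether Ignite tolerates internal and external nodes seeing the same peer under different addresses is exactly the open question):

```xml
<!-- Hypothetical per-node mapping for a two-pod cluster: each pod's
     internal discovery address is mapped to the shared external host
     with its own NodePort. -->
<bean class="org.apache.ignite.configuration.BasicAddressResolver">
  <constructor-arg>
    <map>
      <entry key="10.1.0.11:47500" value="external.example.com:32500"/>
      <entry key="10.1.0.12:47500" value="external.example.com:32501"/>
    </map>
  </constructor-arg>
</bean>
```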

I am not very familiar with the Kubernetes API yet, and wonder if there may
be a way to create a Kubernetes IP finder for an external client/node that
would be smart enough to query the service's external ports automatically. I
think this approach may still have a problem: the
TcpCommunication address received by the client after discovery will either
be an internal address, if the internal Kubernetes nodes aren't using an
AddressResolver, or, if they are, the internal Ignite nodes will end up using
the external addresses to talk to each other. I am not sure it will be
possible for Ignite to allow some nodes in the cluster to know about a
node via one address while other nodes know it via a different
address, but it may work. Even if it does, it may be a trick to get the
right IP for the different contexts (internal versus external) to the other
nodes.

I am motivated to find a solution to this, as it solves some development
time problems for us; it would be great if someone could provide some
guidance or hints.

Thanks,
--Rob

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/