Posted to users@kafka.apache.org by Alexander Nieuwenhuijse <du...@gmail.com> on 2017/03/13 11:07:15 UTC

Kafka on multi-node Kubernetes: External traffic issues

Hi all,

I've been trying to set up a Kafka cluster on Kubernetes, but after being
stuck for a few days I'm looking for some help. Hopefully someone here can
help me out.

Kubernetes setup:
3 dedicated machines, with the following Kubernetes objects:
- Service exposing the ports and DNS records: kafka-service.yaml
  <http://pastebin.com/10ZT9C2X>
- StatefulSet creating 3 pods and generating the server.config files:
  kafka-stateful.yaml <http://pastebin.com/C0P4DES0>
- Example of a generated Kafka configuration file: server.config
  <http://pastebin.com/HJaEjYND>
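
In short, the shape is a headless Service in front of a 3-replica
StatefulSet, roughly like this (a simplified sketch with placeholder names
and image; the real definitions are in the pastebins above):

apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  clusterIP: None            # headless: gives each pod its own DNS record
  ports:
    - port: 9092
      name: kafka
  selector:
    app: kafka
---
apiVersion: apps/v1beta1     # StatefulSet API group at the time of writing
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-service
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: my-kafka-image   # placeholder for the actual Kafka image
          ports:
            - containerPort: 9092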

When running kafka-console-producer.sh from outside of Kubernetes, the
following happens:
state-change.log <http://pastebin.com/g5uXSk53>
external-producer.log <http://pastebin.com/04PSgBht>
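
The producer was started along these lines (broker host and topic name are
placeholders):

bin/kafka-console-producer.sh \
  --broker-list node1.example.com:9092 \
  --topic test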

If I am reading this correctly, all the brokers can create the new topic
and a leader is elected as expected. However, when the producer tries to
send a message it receives a NOT_LEADER_FOR_PARTITION error response. When
testing from within the Kubernetes cluster, inside the actual pod,
everything works fine.
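
Partition leadership can also be checked directly with kafka-topics.sh
(ZooKeeper host and topic name are placeholders):

bin/kafka-topics.sh --zookeeper zookeeper-service:2181 \
  --describe --topic test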

I've explicitly set the advertised listeners to the node's host so that
the producer can reach the broker, which should take care of the
Kubernetes networking issues, shouldn't it? Could it also be caused by the
fact that ZooKeeper runs in Kubernetes as well, so the producer runs into
connection issues there? (I do not see any indication of this in the log
files.)
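
The relevant listener settings in each generated server.config look roughly
like this (the host name is a placeholder; the full generated file is in
the pastebin above):

# Bind on all interfaces inside the pod.
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise the Kubernetes node's host so external clients connect back
# to an address they can actually reach.
advertised.listeners=PLAINTEXT://node1.example.com:9092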

Please let me know if more information is needed to help debug this issue.

With kind regards,

Alexander