Posted to dev@nifi.apache.org by Ward Jans <wa...@ixor.be.INVALID> on 2020/06/16 07:59:00 UTC

Exposing Nifi UI when running a cluster in Docker Swarm

Hi all,

I'm having an issue getting my NiFi UI exposed to end users.  I'm running
an Apache NiFi cluster on Docker Swarm using the configuration below:

version: '3'

services:

  zookeeper:
    hostname: zookeeper
    image: 'bitnami/zookeeper:latest'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  nifi:
    image: apache/nifi:latest
    ports:
      - 8080
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=true
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zookeeper:2181
      - NIFI_ELECTION_MAX_WAIT=1 min

This works fine and I can easily scale up the number of NiFi instances.

However, when trying to access the NiFi UI via the published port, it
doesn't seem to work: I get a connection refused when trying to access it
via any of the swarm nodes.

ID             NAME        MODE        REPLICAS  IMAGE                     PORTS
klp9kjm7jwdy   nifi        replicated  3/3       apache/nifi:latest        *:30003->8080/tcp
qa3rf9pi6uyw   zookeeper   replicated  1/1       bitnami/zookeeper:latest

The problem seems to be that NiFi binds to the hostname of the container
it runs on, which makes it reachable inside the swarm network only via its
container ID.

Accessing the UI does work from within any container inside the swarm
network, but not via the published port.

I also tried setting NIFI_WEB_HTTP_HOST=0.0.0.0 to make sure NiFi binds
to all network interfaces, but that breaks communication between the
instances in the cluster.

How should I configure NiFi/Docker Swarm so that NiFi's UI is properly
accessible through the swarm routing mesh?


Thanks in advance!

Re: Exposing Nifi UI when running a cluster in Docker Swarm

Posted by Ward Jans <wa...@ixor.be.INVALID>.
Thanks for your detailed explanation; the network interface configuration
was the part I was missing.

Got it up and running now using the same approach.  I'm only running
unsecured instances at this point, but I'll keep all your remarks in mind
when moving on, including the one about Kubernetes.

On Tue, 16 Jun 2020 at 10:36, Chris Sampson
<ch...@naimuri.com.invalid> wrote:

> [earlier quoted messages snipped]

Re: Exposing Nifi UI when running a cluster in Docker Swarm

Posted by Chris Sampson <ch...@naimuri.com.INVALID>.
I previously found it was necessary (for Docker Swarm setups) to add all
the network interfaces that Swarm configures in your NiFi containers as
entries in the nifi.properties file, using the appropriate Web Properties
[1] for nifi.web.http.network.interface.* (or the https equivalents if you
choose to secure the UI).
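
For reference, the resulting nifi.properties entries take this shape (the
interface names here are examples, not a prescription):

```
nifi.web.http.network.interface.default=eth0
nifi.web.http.network.interface.eth1=eth1
nifi.web.http.network.interface.eth2=eth2
```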

The network interfaces can be found by exec'ing into a running container
and using a utility such as ifconfig.
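
If the image is too minimal to include ifconfig, the interface names can
also be read straight from sysfs; a quick sketch (run inside the container,
e.g. via `docker exec -it <container-id> sh`):

```shell
# List the container's network interface names without needing ifconfig/ip;
# each entry under /sys/class/net is one interface (lo, eth0, eth1, ...)
ls /sys/class/net
```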

Looking back at the Docker Swarm setup I had (but no longer use), we had
two Docker Swarm networks defined to separate our components into
"frontend" and "backend". That's not necessary, but I mention it because
it explains the following example code a little better:

>     # assuming network interface assignments remain constant between
>     # service restarts/stack deployments:
>     # eth0 = frontend; eth1 = ingress; eth2 = backend; eth3 = unknown (but apparently necessary)
>     NIFI_HTTPS_NETWORK_INTERFACE_DEFAULT: eth0
>     NIFI_HTTPS_NETWORK_INTERFACE_ETH1: eth1
>     NIFI_HTTPS_NETWORK_INTERFACE_ETH2: eth2
>     NIFI_HTTPS_NETWORK_INTERFACE_ETH3: eth3
>
We achieved the above by introducing a new startup script into the Docker
image that uses these environment variables (from our docker-compose
file) and sets them into nifi.properties, i.e.:

> #!/bin/sh -e
>
> properties_file=${NIFI_HOME}/conf/nifi.properties
>
> add_network_entries() {
>   ENV_VAR_PREFIX=$1
>   NIFI_PROPERTY_GROUP=$2
>   for ENTRY in $(printenv | grep "^${ENV_VAR_PREFIX}" | grep -v "_DEFAULT="); do
>     KEY="$(echo "${ENTRY}" | cut -d = -f 1 | sed 's/.*_//' | tr '[:upper:]' '[:lower:]')"
>     INTERFACE="$(echo "${ENTRY}" | sed 's/[^=]\+=//')"
>     echo "Adding '${INTERFACE}' to nifi.properties as 'nifi.web.${NIFI_PROPERTY_GROUP}.network.interface.${KEY}'"
>     sed -i -e "/nifi.web.${NIFI_PROPERTY_GROUP}.network.interface.default=.*/a nifi.web.${NIFI_PROPERTY_GROUP}.network.interface.${KEY}=${INTERFACE}" "${properties_file}"
>   done
> }
>
> # delete existing network interface configs (in case left over from previous runs of start.sh scripts)
> sed -i '/nifi.web.http.network.interface./d' "${properties_file}"
> sed -i '/nifi.web.https.network.interface./d' "${properties_file}"
>
> # (re)populate the default network interfaces (even if blank)
> sed -i -e "/nifi.web.http.port=/a nifi.web.http.network.interface.default=${NIFI_HTTP_NETWORK_INTERFACE_DEFAULT:-}" "${properties_file}"
> sed -i -e "/nifi.web.https.port=/a nifi.web.https.network.interface.default=${NIFI_HTTPS_NETWORK_INTERFACE_DEFAULT:-}" "${properties_file}"
>
> # iterate through any NIFI_HTTP(S)_NETWORK_INTERFACE_* environment variables,
> # adding them as network interface entries
> add_network_entries "NIFI_HTTP_NETWORK_INTERFACE_" "http"
> add_network_entries "NIFI_HTTPS_NETWORK_INTERFACE_" "https"
>
>
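The KEY/INTERFACE extraction used there can be checked on its own; a
standalone run of the same pipeline on a sample value (the variable name is
illustrative):

```shell
# Sample environment entry, as printenv would report it
ENTRY="NIFI_HTTPS_NETWORK_INTERFACE_ETH2=eth2"
# Variable name -> text after the last underscore, lowercased
KEY="$(echo "${ENTRY}" | cut -d = -f 1 | sed 's/.*_//' | tr '[:upper:]' '[:lower:]')"
# Everything after the first "="
INTERFACE="$(echo "${ENTRY}" | sed 's/[^=]\+=//')"
echo "nifi.web.https.network.interface.${KEY}=${INTERFACE}"
# → nifi.web.https.network.interface.eth2=eth2
```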
It was also necessary to change the provided start.sh script to use fully
qualified domain names for the hosts in NIFI_WEB_HTTP(S)_HOST too (and/or
the equivalent properties nifi.remote.input.host and
nifi.cluster.node.address), so:

> sed -ie 's/$HOSTNAME}/&.$(dnsdomainname)/' /opt/nifi/scripts/start.sh
>
>
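To see what that substitution does, here it is applied to a hypothetical
fragment resembling the image's start.sh (the prop_replace line is
illustrative, not the script's exact contents):

```shell
# Hypothetical start.sh fragment (illustrative only)
echo 'prop_replace nifi.web.http.host "${NIFI_WEB_HTTP_HOST:-$HOSTNAME}"' > /tmp/start_fragment.sh
# Same substitution as above: append ".$(dnsdomainname)" after each "$HOSTNAME}"
sed -ie 's/$HOSTNAME}/&.$(dnsdomainname)/' /tmp/start_fragment.sh
cat /tmp/start_fragment.sh
# → prop_replace nifi.web.http.host "${NIFI_WEB_HTTP_HOST:-$HOSTNAME}.$(dnsdomainname)"
```

The literal `$(dnsdomainname)` lands inside the script unexpanded, so the
domain name is resolved when start.sh actually runs in the container.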
Unfortunately, it currently seems a bit fiddly to get NiFi running within
Docker Swarm (we've subsequently moved to Kubernetes and chose to inject
nifi.properties into the container rather than using the provided image's
start.sh scripts).


Hope it helps.


[1]:
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#web-properties

*Chris Sampson*
IT Consultant
chris.sampson@naimuri.com



On Tue, 16 Jun 2020 at 08:59, Ward Jans <wa...@ixor.be.invalid> wrote:

> [quoted original message snipped]