Posted to dev@nifi.apache.org by Mark Bean <ma...@gmail.com> on 2018/10/31 17:45:29 UTC

Load Balancing

I am trying to understand how the load balancing works in NiFi 1.8.0.

I have a 2-node cluster. I use an UpdateAttribute processor to set the value
of a property, "balancer", to either 0 or 1, using the stateful EL expression
${getStateValue('balancer'):plus(1):mod(2)}.
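As a side note, a minimal Python sketch of what that stateful expression computes (hypothetical names; it assumes the state starts at 0 and that UpdateAttribute persists the new value on each invocation):

```python
# Sketch of ${getStateValue('balancer'):plus(1):mod(2)}:
# read the previous value, add 1, take mod 2 -- the result alternates.
state = {"balancer": "0"}  # stateful EL reads the previously stored value

def next_balancer(state):
    previous = int(state.get("balancer", "0"))
    value = (previous + 1) % 2
    state["balancer"] = str(value)  # UpdateAttribute persists the new state
    return value

values = [next_balancer(state) for _ in range(4)]
print(values)  # [1, 0, 1, 0]
```

So successive FlowFiles get balancer=1, 0, 1, 0, ... which is what makes the two partitions in the test flow.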

The connection for the output of the UpdateAttribute processor is load
balanced.
Load Balance Strategy: Partition by attribute
Attribute name: balancer
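Conceptually, "Partition by attribute" guarantees that every FlowFile with the same attribute value is routed to the same node. A toy sketch of that invariant (plain hash-mod-N here; NiFi's actual partitioner is more involved, so treat this only as an illustration):

```python
def node_for(attribute_value: str, nodes: list) -> str:
    # Same attribute value -> same node, deterministically.
    # A stable toy hash (not Python's randomized hash()):
    index = sum(attribute_value.encode()) % len(nodes)
    return nodes[index]

nodes = ["node-1", "node-2"]
assert node_for("0", nodes) == node_for("0", nodes)  # deterministic
# Distinct values may or may not land on distinct nodes; the only
# guarantee is same-value -> same-node.
print(node_for("0", nodes), node_for("1", nodes))
```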

The queue in the connection contains 5 objects, but when I perform a "List
queue", I only see 3 flowfiles. All of the flowfiles are on the same Node,
and as expected have the same "balancer" attribute value.

Presumably, the other 2 flowfiles were load-balanced to the other Node.
However, they should still be visible in List queue, correct?

Perhaps related, the load balance icon on the connection indicates
"Actively balancing...". There are only two 10 byte files, but the
balancing never seems to complete.

Re: Load Balancing

Posted by Koji Kawamura <ij...@gmail.com>.
Hi Mark,

> In this scenario, should the nifi.cluster.load.balance.comms.timeout have caused the balancing operation to terminate (unsuccessful)?

I agree with that. Weren't there any WARN log messages written?
Currently the NiFi UI doesn't have the capability to show
load-balancing-related errors on the canvas; it only indicates whether
balancing is currently active or not.

> Another question: the usage of nifi.cluster.load.balance.host (and .port) values is not clear to me. If Node A sets this value to Node B's FQDN, would this allow Node A to "spoof" Node B and accept load balanced items intended for Node B?

The two properties control how a NiFi node opens its socket to
receive load-balanced data from other nodes. They can be useful when a
node has multiple NICs and you want to use a specific one for
load balancing. If you specify a hostname or IP address that does not
belong to the node, then you'll get an exception when the node tries to
open the socket, since it can't bind to the specified address.
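A small Python illustration of the bind behavior described above (the addresses are placeholders; 203.0.113.1 is a TEST-NET-3 documentation address, so it is not a local interface on any real host):

```python
import socket

def try_bind(host: str, port: int = 0) -> str:
    # Mimics a node opening its load-balance socket on a specific address.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))  # port 0 lets the OS pick a free port
        return "bound"
    except OSError:
        return "cannot assign requested address"
    finally:
        sock.close()

print(try_bind("0.0.0.0"))      # any local interface: succeeds
print(try_bind("203.0.113.1"))  # not a local address: bind fails
```

This is the same failure mode a node would hit if load.balance.host were set to another node's FQDN: the bind fails outright rather than letting the node "spoof" its peer.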

Thanks,
Koji

Re: Load Balancing

Posted by ma...@gmail.com.
I found the problem: iptables was blocking the load balancing port. Once the port was opened, the balance completed and all files were visible via List queue.
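For anyone hitting the same symptom, the firewall fix looks roughly like this (a sketch assuming the default load-balance port, 6342; substitute whatever your nifi.cluster.load.balance.port is set to, and run on each node):

```shell
# Allow inbound TCP connections on the load-balance port
iptables -A INPUT -p tcp --dport 6342 -j ACCEPT
# Persisting the rule across reboots is distribution-specific, e.g. on
# RHEL/CentOS 6:
#   service iptables save
```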

In this scenario, should the nifi.cluster.load.balance.comms.timeout have caused the balancing operation to terminate (unsuccessful)?

Another question: the usage of nifi.cluster.load.balance.host (and .port) values is not clear to me. If Node A sets this value to Node B's FQDN, would this allow Node A to "spoof" Node B and accept load balanced items intended for Node B?
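For context, these are the cluster load-balancing properties in nifi.properties; the values shown are, to the best of my recollection, the 1.8.0 defaults (leaving .host blank makes the node bind to its own hostname):

```
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
```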

