Posted to user@ignite.apache.org by colinc <co...@yahoo.co.uk> on 2021/01/18 18:58:39 UTC

Re: Partition mapping to Ignite node

We have found that the simplest way to achieve this is by overriding the
Kafka partitioner and assignor - and delegating to Ignite for partition
assignment and the affinity function respectively. In this way, Ignite
controls both the data-to-partition and partition-to-node assignments, and
Kafka reflects them. The AffinityFunction can be supplied with a topology (a
collection of ClusterNode) that reflects the current Kafka consumers; Ignite
only uses the persistent node ID in the affinity function.
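The producer-side half of that delegation might look roughly like the sketch
below - not our actual code. The cache name "myCache" and obtaining the grid
via Ignition.ignite() are illustrative assumptions; in practice they would
come from the partitioner's configure() properties.

```java
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// A Kafka Partitioner that delegates the partition choice to Ignite's
// affinity function, so the Kafka partition for a key always matches the
// Ignite partition for the same key.
public class IgniteDelegatingPartitioner implements Partitioner {
    private Affinity<Object> affinity;

    @Override
    public void configure(Map<String, ?> configs) {
        Ignite ignite = Ignition.ignite();     // assumes a started Ignite node in this JVM
        affinity = ignite.affinity("myCache"); // illustrative cache name
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Ignite decides the partition; Kafka mirrors it. This relies on the
        // topic having at least as many partitions as the Ignite cache.
        return affinity.partition(key);
    }

    @Override
    public void close() {
        // nothing to release
    }
}
```

The partitioner would be registered on the producer via the
partitioner.class property; the consumer-side assignor would delegate its
partition-to-consumer mapping to the same affinity function in the same way.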

Implications seem to be:
* There must be at least as many Kafka partitions as Ignite partitions for a
given topic/cache - ideally an equal number, to keep things simple.
* There may be some remote operations during a topology change.
* The Ignite RendezvousAffinityFunction.assignPartition() consumes a
neighborhoodCache map. This is used to ensure that primary and backup nodes
do not land on the same host when exclNeighbors=true. We haven't tested in
this configuration to determine whether it is deterministic.

The final point is important because we are not using the internal Ignite
version of the AffinityFunction, but rather a version that has been created
for the purpose of the Kafka assignor. This works fine if the
AffinityFunction is stateless, but the neighborhoodCache makes it stateful.
If the state of the various AffinityFunction instances is not in sync, could
invocations be inconsistent even for the same topology?
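To illustrate why statelessness matters here: a pure rendezvous-style
function gives every instance the same answer for the same topology, with no
coordination at all. A self-contained sketch (no Ignite dependency - the
class, names and hash mixing are illustrative, not Ignite's actual
implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.UUID;

public class RendezvousSketch {
    // Deterministic pseudo-random weight for a (partition, node) pair.
    static long weight(int partition, UUID nodeId) {
        long h = nodeId.getMostSignificantBits() ^ nodeId.getLeastSignificantBits();
        h ^= 0x9E3779B97F4A7C15L * (partition + 1);
        h ^= (h >>> 33); h *= 0xFF51AFD7ED558CCDL; h ^= (h >>> 33);
        return h;
    }

    // Stateless: the owner of a partition depends only on the topology
    // passed in, so any two instances agree without synchronization.
    static UUID assignPartition(int partition, List<UUID> topology) {
        UUID best = null;
        long bestW = Long.MIN_VALUE;
        for (UUID n : topology) {
            long w = weight(partition, n);
            if (best == null || w > bestW || (w == bestW && n.compareTo(best) > 0)) {
                best = n;
                bestW = w;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<UUID> topology = Arrays.asList(
            UUID.fromString("00000000-0000-0000-0000-000000000001"),
            UUID.fromString("00000000-0000-0000-0000-000000000002"),
            UUID.fromString("00000000-0000-0000-0000-000000000003"));

        // Two independent "instances" (e.g. one inside Ignite, one inside
        // the Kafka assignor) agree because the function carries no state.
        for (int p = 0; p < 16; p++) {
            UUID a = assignPartition(p, new ArrayList<>(topology));
            UUID b = assignPartition(p, new ArrayList<>(topology));
            if (!a.equals(b))
                throw new AssertionError("diverged at partition " + p);
        }
        System.out.println("consistent");
    }
}
```

A neighborhoodCache breaks exactly this property: the result then also
depends on per-instance state, which is the inconsistency we are asking
about.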

Does anyone have thoughts on viability of the above in case of
exclNeighbors=true?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Partition mapping to Ignite node

Posted by colinc <co...@yahoo.co.uk>.
Good answer, thanks. We'll use the backup filter to create a dedicated backup
subcluster.




Re: Partition mapping to Ignite node

Posted by Ilya Kasnacheev <il...@gmail.com>.
Hello!

I think that you should just use an AffinityBackupFilter and turn
exclNeighbors off. And please try to keep it stateless and in sync between
nodes - I can't imagine what happens otherwise.
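For reference, a backup filter can be set on the RendezvousAffinityFunction
in the Spring XML configuration. A sketch using the built-in
ClusterNodeAttributeAffinityBackupFilter, which prefers backups on nodes
whose named attribute differs from the primary's (the cache name and the
AVAILABILITY_ZONE attribute are illustrative):

```xml
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="myCache"/>
        <property name="backups" value="1"/>
        <property name="affinity">
            <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
                <property name="excludeNeighbors" value="false"/>
                <property name="affinityBackupFilter">
                    <bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
                        <constructor-arg>
                            <array value-type="java.lang.String">
                                <value>AVAILABILITY_ZONE</value>
                            </array>
                        </constructor-arg>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</property>
```

Unlike exclNeighbors, such a filter is a pure function of the node
attributes and topology it is given, so it stays stateless across instances.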

Regards,
-- 
Ilya Kasnacheev


On Mon, 18 Jan 2021 at 21:58, colinc <co...@yahoo.co.uk> wrote:

> [quoted original message trimmed]