Posted to user@flink.apache.org by "Tzu-Li (Gordon) Tai" <tz...@apache.org> on 2017/12/01 04:16:24 UTC

Re: FlinkKafkaProducerXX

Hi Mike,

The rationale behind making the FlinkFixedPartitioner the default is that
each Flink sink partition (i.e., one sink parallel subtask) then maps to a
single Kafka partition.
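
For illustration, the core of the fixed partitioner boils down to roughly
the following sketch (simplified; the exact `FlinkKafkaPartitioner` method
signatures may differ slightly across Flink versions):

    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

    // Sketch: every record from a given sink subtask goes to the same
    // Kafka partition, derived from the subtask index.
    public class FixedPartitionerSketch<T> extends FlinkKafkaPartitioner<T> {

        private int parallelInstanceId;

        @Override
        public void open(int parallelInstanceId, int parallelInstances) {
            // Remember which sink subtask this instance runs in.
            this.parallelInstanceId = parallelInstanceId;
        }

        @Override
        public int partition(T record, byte[] key, byte[] value,
                             String targetTopic, int[] partitions) {
            // Subtask i always writes to partition i (modulo partition count).
            return partitions[parallelInstanceId % partitions.length];
        }
    }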

One other thing to clarify:
If the partitioner is set to null, partitioning is based on a hash of the
record's attached key (the key retrieved from the
`KeyedSerializationSchema`), not on round-robin assignment.
To get round-robin partitioning, a custom partitioner has to be provided.
Note, however, that with a round-robin partitioner every Flink sink
parallel subtask opens connections to all Kafka brokers, which can add up
to a large number of network connections.
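
A minimal round-robin partitioner could look roughly like the following
(a sketch with a made-up class name, not a readily available class):

    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

    // Sketch: cycle through all partitions of the target topic. The counter
    // is per sink subtask, so each subtask independently ends up writing to
    // every partition -- hence the many broker connections mentioned above.
    public class RoundRobinPartitioner<T> extends FlinkKafkaPartitioner<T> {

        private int nextPartition = -1;

        @Override
        public int partition(T record, byte[] key, byte[] value,
                             String targetTopic, int[] partitions) {
            nextPartition = (nextPartition + 1) % partitions.length;
            return partitions[nextPartition];
        }
    }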

To conclude: I think the appropriate partitioning scheme depends on the
actual use case.
For example, for a simple Flink job that only filters data and has no
aggregation in the pipeline, key-hash based partitioning would probably be
the better fit.
For more complex pipelines that already partition the computation by key,
a direct mapping of each Flink sink partition to a Kafka partition could
make sense.
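
To make the options concrete, here is roughly how each scheme is wired up,
using the 0.11 producer as an example (the constructor overloads vary a bit
between FlinkKafkaProducer versions, and `stream`, `schema`, `keyedSchema`
and `MyEvent` are placeholders, so treat this as a sketch):

    import java.util.Optional;
    import java.util.Properties;

    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");

    // Default: FlinkFixedPartitioner, one sink subtask -> one Kafka partition.
    stream.addSink(new FlinkKafkaProducer011<>("topic", schema, props));

    // Partitioner explicitly absent: the hash of the record's attached key
    // (from the KeyedSerializationSchema) picks the Kafka partition.
    stream.addSink(new FlinkKafkaProducer011<>(
            "topic", keyedSchema, props,
            Optional.<FlinkKafkaPartitioner<MyEvent>>empty()));

    // Custom partitioner, e.g. the round-robin sketch above.
    stream.addSink(new FlinkKafkaProducer011<>(
            "topic", keyedSchema, props,
            Optional.of(new RoundRobinPartitioner<MyEvent>())));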

On the other hand, considering that the key for each record is always
"re-calculated" by the `KeyedSerializationSchema` in each Flink Kafka
producer sink partition, it might actually make sense to make the key-hash
partitioner the default instead.

Cheers,
Gordon




Re: FlinkKafkaProducerXX

Posted by Mikhail Pryakhin <m....@gmail.com>.
Exactly. At the very least, it would be worth mentioning in the Javadoc which partitioner is used by default when none is specified, since the default behavior may not be obvious.


Kind Regards,
Mike Pryakhin


> On 3 Dec 2017, at 22:08, Stephan Ewen <se...@apache.org> wrote:
> 
> Sounds like adding a round robin partitioner to the set of readily available partitioners would make sense.


Re: FlinkKafkaProducerXX

Posted by Stephan Ewen <se...@apache.org>.
Sounds like adding a round robin partitioner to the set of readily
available partitioners would make sense.
