Posted to issues@flink.apache.org by "Sergey (JIRA)" <ji...@apache.org> on 2019/05/07 12:30:00 UTC

[jira] [Commented] (FLINK-12294) Kafka connector, work with grouping partitions

    [ https://issues.apache.org/jira/browse/FLINK-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834712#comment-16834712 ] 

Sergey commented on FLINK-12294:
--------------------------------

[~gjy],

Hi,

As an example implementation, I have attached code based on the 1.8.0 flink-runtime (pardon my Java):

I changed the assignToKeyGroup method in org/apache/flink/runtime/state/KeyGroupRangeAssignment.java

and added a KeyGroupAssigner interface: org/apache/flink/runtime/state/KeyGroupAssigner.java

so that, by providing an implementation of this interface, it is possible to control key-group distribution without any significant change to the flink-runtime model.
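Since the attached files are not inlined in this mail, the following is only a minimal sketch of what such a pluggable assigner could look like; the interface name is taken from the attachment, but the method signature and the static registration hook are assumptions for illustration and the actual attachments may differ. The default branch reproduces the existing Flink 1.8 murmur-hash assignment.

{code:java}
package org.apache.flink.runtime.state;

import java.io.Serializable;

import org.apache.flink.util.MathUtils;

/** Sketch of a pluggable assigner; the attached KeyGroupAssigner.java may differ. */
interface KeyGroupAssigner extends Serializable {

    /** Maps a key to a key group in the range [0, maxParallelism). */
    int assignToKeyGroup(Object key, int maxParallelism);
}

/**
 * Sketch of how KeyGroupRangeAssignment#assignToKeyGroup could delegate to a
 * user-supplied assigner while keeping the current murmur-hash behaviour as the
 * default. The static hook below is an assumption for illustration only; how the
 * assigner would actually be registered per job is not shown here.
 */
final class KeyGroupRangeAssignmentSketch {

    private static volatile KeyGroupAssigner customAssigner;

    static void setKeyGroupAssigner(KeyGroupAssigner assigner) {
        customAssigner = assigner;
    }

    static int assignToKeyGroup(Object key, int maxParallelism) {
        KeyGroupAssigner assigner = customAssigner;
        if (assigner != null) {
            // Custom distribution, e.g. derive the key group from the Kafka partition id.
            return assigner.assignToKeyGroup(key, maxParallelism);
        }
        // Existing Flink 1.8 behaviour: murmur-hash of the key's hashCode.
        return MathUtils.murmurHash(key.hashCode()) % maxParallelism;
    }

    private KeyGroupRangeAssignmentSketch() {}
}
{code}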

> Kafka connector, work with grouping partitions
> ----------------------------------------------
>
>                 Key: FLINK-12294
>                 URL: https://issues.apache.org/jira/browse/FLINK-12294
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / Kafka, Runtime / Coordination
>            Reporter: Sergey
>            Priority: Major
>              Labels: performance
>         Attachments: KeyGroupAssigner.java, KeyGroupRangeAssignment.java
>
>
> Add a flag (false by default) that indicates whether the topic partitions are already grouped by the key, and skip the unnecessary shuffle/resorting step when it is set to true. As an example, say we have clients' payment transactions in a Kafka topic. We group by clientId (transactions with the same clientId go to one Kafka topic partition) and the task is to find the maximum transaction per client in sliding windows. In map/reduce terms there is no need to shuffle data between all topic consumers; at most it may be worth doing within each consumer to gain some speedup by increasing the number of executors working on each partition's data. With N messages in a partition it would then take just N operations instead of the N*ln(N) of the current implementation with shuffle/resorting. For windows with thousands of events, that is a tenfold gain in execution speed.
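
For reference, a minimal sketch of the job described above, written against the Flink 1.8 DataStream API. The topic name, the "clientId,amount" record format, the consumer properties and the window sizes are assumptions made for illustration; the keyBy step marked below is the network shuffle that the proposed flag would let Flink skip when the topic is declared as already grouped by key.

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class MaxTransactionPerClient {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "payments");

        // Records are produced with clientId as the Kafka message key, so all
        // transactions of one client already sit in a single topic partition.
        DataStream<String> raw = env.addSource(
                new FlinkKafkaConsumer<>("payments", new SimpleStringSchema(), props));

        raw
                // Parse "clientId,amount" lines into (clientId, amount) tuples.
                .map(line -> {
                    String[] parts = line.split(",");
                    return Tuple2.of(parts[0], Double.parseDouble(parts[1]));
                })
                .returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
                // This keyBy is the shuffle the proposed flag would make unnecessary
                // when the topic partitions are already grouped by clientId.
                .keyBy(0)
                .window(SlidingProcessingTimeWindows.of(Time.minutes(10), Time.minutes(1)))
                .maxBy(1)
                .print();

        env.execute("Max transaction per client");
    }
}
{code}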



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)