Posted to issues@spark.apache.org by "Shixiong Zhu (JIRA)" <ji...@apache.org> on 2017/02/13 07:02:41 UTC

[jira] [Resolved] (SPARK-19564) KafkaOffsetReader's consumers should not be in the same group

     [ https://issues.apache.org/jira/browse/SPARK-19564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shixiong Zhu resolved SPARK-19564.
----------------------------------
       Resolution: Fixed
         Assignee: Liwei Lin
    Fix Version/s: 2.2.0
                   2.1.1

> KafkaOffsetReader's consumers should not be in the same group
> -------------------------------------------------------------
>
>                 Key: SPARK-19564
>                 URL: https://issues.apache.org/jira/browse/SPARK-19564
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.1.1, 2.2.0
>            Reporter: Liwei Lin
>            Assignee: Liwei Lin
>            Priority: Minor
>             Fix For: 2.1.1, 2.2.0
>
>
> In `KafkaOffsetReader`, when an error occurs, we abort the existing consumer and create a new one. In the current implementation, the first consumer and the second consumer end up in the same group, which violates our intention that the two consumers never share a group.
> The cause is that the first consumer is created before `groupId` and `nextId` are initialized in the constructor. So even though `nextId` is incremented while that first consumer is being created, the initializers for `groupId` and `nextId` run afterwards and reset both fields to their default values.
> We should make sure that `groupId` and `nextId` are initialized before any consumer is created.
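
For illustration, here is a minimal Scala sketch of the initialization-order pitfall described above. It is not Spark's actual code: the `Reader` class, the `prefix` parameter, and the `recreateConsumer` helper are hypothetical stand-ins for `KafkaOffsetReader` and its consumer-creation path. Scala runs field initializers in declaration order, so a consumer created by an earlier field's initializer sees `nextId` at its JVM default (0), and the increment it performs is then overwritten when `nextId`'s own initializer runs.

    object InitOrderDemo {
      class Reader(prefix: String) {
        // Declared (and therefore initialized) FIRST: this stands in for
        // creating the first consumer. nextGroupId() runs while nextId is
        // still the JVM default 0.
        private var consumerGroup: String = nextGroupId()

        // This initializer runs AFTER the field above, resetting nextId
        // to 0 and wiping out the increment done by nextGroupId().
        private var nextId = 0

        private def nextGroupId(): String = {
          val id = s"$prefix-$nextId"
          nextId += 1
          id
        }

        // Simulates aborting the consumer and creating a new one.
        def recreateConsumer(): Unit = { consumerGroup = nextGroupId() }

        def group: String = consumerGroup
      }

      def main(args: Array[String]): Unit = {
        val r = new Reader("spark-kafka-source")
        println(r.group)      // spark-kafka-source-0
        r.recreateConsumer()
        println(r.group)      // spark-kafka-source-0 again: same group
      }
    }

Declaring `nextId` (and, in the real class, `groupId`) ahead of the field that creates the consumer, or otherwise guaranteeing they are initialized first, makes the second call produce `spark-kafka-source-1`, which is the intent of the fix.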



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org