Posted to issues@nifi.apache.org by "ASF subversion and git services (Jira)" <ji...@apache.org> on 2021/03/24 14:00:10 UTC

[jira] [Commented] (NIFI-8357) ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using statically assigned partitions

    [ https://issues.apache.org/jira/browse/NIFI-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307844#comment-17307844 ] 

ASF subversion and git services commented on NIFI-8357:
-------------------------------------------------------

Commit 74ea3840ac98c8deff1ab83f673cc8fcb7072bcd in nifi's branch refs/heads/main from Mark Payne
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=74ea384 ]

NIFI-8357: Updated Kafka 2.6 processors to automatically handle recreating Consumer Lease objects when an existing one is poisoned, even if using statically assigned partitions

This closes #4926.

Signed-off-by: Peter Turcsanyi <tu...@apache.org>


> ConsumeKafka(Record)_2_0, ConsumeKafka(Record)_2_6 do not reconnect if using statically assigned partitions
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: NIFI-8357
>                 URL: https://issues.apache.org/jira/browse/NIFI-8357
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>            Priority: Critical
>             Fix For: 1.14.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> If using statically assigned partitions in ConsumeKafka_2_0, ConsumeKafkaRecord_2_0, ConsumeKafka_2_6, or ConsumeKafkaRecord_2_6 (by adding {{partitions.<hostname>}} properties), when a client connection fails, the processor recreates the connection but does not re-assign the partitions. As a result, the consumer stops consuming data from its partition(s), and each newly created Kafka client is leaked. Over time this can accumulate many leaked connections and potentially exhaust the heap or cause IOException: too many open files.
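The pattern the commit describes can be illustrated with a plain-Java sketch. All class and method names below (ConsumerLease, LeasePool, obtainLease) are hypothetical stand-ins, not NiFi's actual API: the point is that when a lease is poisoned, it must be replaced by a fresh lease that re-applies the same static partition assignment, rather than being reused or silently dropped.

```java
import java.util.List;

// Hypothetical sketch of the poisoned-lease pattern; these are NOT NiFi's
// real classes, only an illustration of the recreate-on-poison behavior.
class ConsumerLease {
    private final List<Integer> assignedPartitions; // static assignment
    private boolean poisoned = false;

    ConsumerLease(List<Integer> partitions) {
        this.assignedPartitions = partitions;
    }

    void poison() { poisoned = true; }
    boolean isPoisoned() { return poisoned; }
    List<Integer> getAssignedPartitions() { return assignedPartitions; }
}

class LeasePool {
    private ConsumerLease lease;

    LeasePool(List<Integer> staticPartitions) {
        this.lease = new ConsumerLease(staticPartitions);
    }

    // Before the fix described in the issue, a poisoned lease was never
    // recreated: its statically assigned partitions stopped being consumed
    // and the underlying client leaked. The fix is to detect the poisoned
    // lease and build a replacement carrying the same partition assignment.
    ConsumerLease obtainLease() {
        if (lease.isPoisoned()) {
            lease = new ConsumerLease(lease.getAssignedPartitions());
        }
        return lease;
    }
}

public class PoisonedLeaseDemo {
    public static void main(String[] args) {
        LeasePool pool = new LeasePool(List.of(0, 1, 2));
        ConsumerLease first = pool.obtainLease();
        first.poison(); // e.g. an exception during poll poisons the lease
        ConsumerLease second = pool.obtainLease();
        System.out.println(first == second);                // false: recreated
        System.out.println(second.getAssignedPartitions()); // [0, 1, 2]
        System.out.println(second.isPoisoned());            // false
    }
}
```

The key design point is that with static assignment there is no consumer-group rebalance to restore partitions automatically, so the replacement lease must carry the assignment forward explicitly.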



--
This message was sent by Atlassian Jira
(v8.3.4#803005)