Posted to issues@nifi.apache.org by "Bartłomiej Tartanus (Jira)" <ji...@apache.org> on 2019/11/04 16:02:00 UTC

[jira] [Updated] (NIFI-6836) ConsumeKafkaRecord_1_0 reads data from first 100 topics

     [ https://issues.apache.org/jira/browse/NIFI-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bartłomiej Tartanus updated NIFI-6836:
--------------------------------------
    Description: 
ConsumeKafkaRecord_1_0 reads data from the first 100 topics and ignores the rest. The bug is caused by this line:
 [https://github.com/apache/nifi/blob/45ebeba846cf257c0dd27669c25e244208758fb9/nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-1-0-processors/src/main/java/org/apache/nifi/processors/kafka/pubsub/ConsumeKafkaRecord_1_0.java#L316]

This could easily be fixed by removing {{,100}}, but I assume someone put this limit in for a reason. If the limit is needed, the documentation should mention it, and a validator should check that the number of topics does not exceed 100.
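The {{,100}} refers to the limit argument of Java's {{String.split(String regex, int limit)}}: with a positive limit, the returned array has at most that many entries, and the last entry keeps the remainder of the string uncut (commas included), which would produce a malformed "topic" name. A minimal sketch (not the NiFi code itself) illustrating the behavior:

```java
public class SplitLimitDemo {
    public static void main(String[] args) {
        // A comma-separated topic list, as entered in the processor's
        // "Topic Name(s)" property.
        String topics = "a,b,c";

        // With a positive limit, split() returns at most `limit` entries;
        // the last entry retains the rest of the string verbatim.
        String[] limited = topics.split(",", 2);
        System.out.println(limited.length);  // 2
        System.out.println(limited[1]);      // "b,c" -- not a real topic

        // With no limit (or limit 0), every entry is split out.
        String[] full = topics.split(",");
        System.out.println(full.length);     // 3
    }
}
```

So with 100+ topics configured, topics beyond the 99th are fused into one bogus entry rather than subscribed to individually.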

I can prepare a merge request once we agree on how to handle this issue.

 

The current workaround is to add another ConsumeKafkaRecord_1_0 processor and split the topic list into parts, each containing at most 100 topics.


> ConsumeKafkaRecord_1_0 reads data from first 100 topics
> -------------------------------------------------------
>
>                 Key: NIFI-6836
>                 URL: https://issues.apache.org/jira/browse/NIFI-6836
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.9.2
>            Reporter: Bartłomiej Tartanus
>            Priority: Major
>              Labels: easyfix
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)