Posted to issues@nifi.apache.org by "Mark Payne (JIRA)" <ji...@apache.org> on 2017/12/08 16:22:00 UTC

[jira] [Updated] (NIFI-4680) Improve error handling in Publish/Consume Kafka processors

     [ https://issues.apache.org/jira/browse/NIFI-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne updated NIFI-4680:
-----------------------------
    Fix Version/s: 1.5.0
           Status: Patch Available  (was: Open)

> Improve error handling in Publish/Consume Kafka processors
> ----------------------------------------------------------
>
>                 Key: NIFI-4680
>                 URL: https://issues.apache.org/jira/browse/NIFI-4680
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.5.0
>
>
> While reviewing NIFI-4639, I encountered a couple of issues with how NiFi handles errors in the Publish and Consume Kafka processors.
> When interacting with the Hortonworks Schema Registry, if the processor was unable to connect, the exception thrown was a RuntimeException rather than an IOException. As a result, ConsumeKafkaRecord kept trying to parse every record it received, which could make stopping the processor take a very long time.
> On the publish side, when this happened, some FlowFiles were transferred back to their original queues and then an attempt was made to transfer them to failure. As a result, the session rolled back instead of transferring anything to failure, and an error message indicated that the FlowFile had already been transferred.
> When attempting to roll back consumed records, an NPE was thrown if reading from the beginning of the topic (i.e., no offsets had yet been committed).
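The NPE on rollback can be sketched as a missing null guard: when no offset has ever been committed for a partition, the offset lookup returns null, and unboxing it throws a NullPointerException. The class and method names below are illustrative, not NiFi's actual internals; this is a minimal sketch of the guard the fix describes.

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetRollbackSketch {
    // Last committed offset per partition. Empty when consuming from the
    // beginning of a topic, since nothing has been committed yet.
    private final Map<Integer, Long> committedOffsets = new HashMap<>();

    public void commit(final int partition, final long offset) {
        committedOffsets.put(partition, offset);
    }

    /**
     * Returns the offset to seek to when rolling back consumed records.
     * Without the null check, unboxing the null Long returned by get()
     * for an uncommitted partition would throw a NullPointerException.
     */
    public long rollbackOffset(final int partition) {
        final Long offset = committedOffsets.get(partition);
        return (offset == null) ? 0L : offset; // fall back to the beginning
    }

    public static void main(String[] args) {
        final OffsetRollbackSketch lease = new OffsetRollbackSketch();
        // No commits yet: rollback falls back to offset 0 instead of an NPE.
        System.out.println(lease.rollbackOffset(0));
        lease.commit(0, 42L);
        System.out.println(lease.rollbackOffset(0));
    }
}
```

In the real processors the fallback would be a seek-to-beginning on the Kafka consumer rather than a literal offset 0, but the shape of the fix is the same: treat "no committed offset" as a valid state instead of dereferencing null.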



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)