Posted to issues@flink.apache.org by "Aljoscha Krettek (Jira)" <ji...@apache.org> on 2020/05/29 09:24:00 UTC

[jira] [Commented] (FLINK-18017) have Kafka connector report metrics on null records

    [ https://issues.apache.org/jira/browse/FLINK-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119435#comment-17119435 ] 

Aljoscha Krettek commented on FLINK-18017:
------------------------------------------

The deserialization schemas now offer an {{open()}} method that gives access to metrics. Also, the {{deserialize()}} method now gets a collector and can choose not to emit certain records. The combination of the two should cover your use case, right?
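
For illustration, a minimal sketch of how the two could be combined (the metric name {{corruptedRecords}}, the class name, and the {{Long}}-parsing logic are placeholders for this example, not anything prescribed by the API):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.metrics.Counter;
import org.apache.flink.util.Collector;

/** Parses each Kafka record as a long; corrupted records are counted and dropped. */
public class CountingLongDeserializationSchema implements DeserializationSchema<Long> {

    private transient Counter corruptedRecords;

    @Override
    public void open(InitializationContext context) throws Exception {
        // The open() hook exposes the consumer's metric group.
        corruptedRecords = context.getMetricGroup().counter("corruptedRecords");
    }

    @Override
    public Long deserialize(byte[] message) throws IOException {
        // Kept for the interface contract; the collector variant below is the
        // intended entry point and handles corrupted input.
        return Long.parseLong(new String(message, StandardCharsets.UTF_8).trim());
    }

    @Override
    public void deserialize(byte[] message, Collector<Long> out) throws IOException {
        try {
            out.collect(Long.parseLong(new String(message, StandardCharsets.UTF_8).trim()));
        } catch (NumberFormatException e) {
            // Count the corrupted record and emit nothing instead of failing the job.
            corruptedRecords.inc();
        }
    }

    @Override
    public boolean isEndOfStream(Long nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Long> getProducedType() {
        return Types.LONG;
    }
}
{code}

A job could then monitor and alert on the {{corruptedRecords}} counter, without the fetcher having to special-case null records.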

cc [~dwysakowicz]

> have Kafka connector report metrics on null records 
> ----------------------------------------------------
>
>                 Key: FLINK-18017
>                 URL: https://issues.apache.org/jira/browse/FLINK-18017
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>    Affects Versions: 1.9.1
>            Reporter: Yu Yang
>            Priority: Major
>
> Corrupted messages can get into the message pipeline for various reasons.  When a Flink deserializer fails to deserialize a message and throws an exception due to the corruption, the Flink application is blocked until we update the deserializer to handle the exception.  AbstractFetcher.emitRecordsWithTimestamps skips null records.  We need to add a metric on the number of null records so that users can measure how many null records the KafkaConnector encounters, and set up monitoring & alerting based on that. 
> [https://github.com/apache/flink/blob/1cd696d92c3e088a5bd8e5e11b54aacf46e92ae8/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/AbstractFetcher.java#L350]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)