Posted to jira@kafka.apache.org by "Rahul Battacharya (Jira)" <ji...@apache.org> on 2020/02/05 21:17:00 UTC

[jira] [Created] (KAFKA-9508) Kafka connect handling of database length checks

Rahul Battacharya created KAFKA-9508:
----------------------------------------

             Summary: Kafka connect handling of database length checks
                 Key: KAFKA-9508
                 URL: https://issues.apache.org/jira/browse/KAFKA-9508
             Project: Kafka
          Issue Type: Bug
          Components: KafkaConnect
    Affects Versions: 1.1.0
            Reporter: Rahul Battacharya


Kafka connectors that interact with databases, especially sink connectors, need a way to handle field length mismatches. Most databases, such as Oracle, enforce column lengths, but there is no way to enforce the same limits in Avro. We could write KSQL or Kafka Streams jobs to filter out these records, but the KSQL query can become very large and hard to manage, since a table may have hundreds of fields. An easier approach would probably be for the connector's put method to filter out these records using the database metadata already available to the connector, and then either discard the bad records or write them to a DLQ topic.
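
As a rough illustration only (not part of this report), a sink task's put() could compare string values against cached column lengths and divert oversized records. The columnLengths cache, the writeToDatabase() helper, and the use of the ErrantRecordReporter from KIP-610 (Kafka 2.6+) are all assumptions made for this sketch:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.errors.DataException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Minimal sketch of the idea described above: check field lengths against
// database metadata inside put() and send oversized records to a DLQ
// (or silently discard them) instead of letting the INSERT fail.
public class LengthCheckingSinkTask extends SinkTask {

    // Hypothetical cache of column lengths, e.g. loaded once from
    // DatabaseMetaData.getColumns() for the target table.
    private final Map<String, Integer> columnLengths = new HashMap<>();

    private ErrantRecordReporter reporter; // null on workers older than 2.6

    @Override
    public void start(Map<String, String> props) {
        // In a real connector the column lengths would be populated here
        // from JDBC metadata; omitted in this sketch.
        try {
            reporter = context.errantRecordReporter(); // KIP-610
        } catch (NoSuchMethodError | NoClassDefFoundError e) {
            reporter = null;
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            if (fitsColumns(record)) {
                writeToDatabase(record);            // normal write path
            } else if (reporter != null) {
                reporter.report(record, new DataException(
                        "Field value exceeds database column length"));
            }
            // else: discard the bad record, as suggested above
        }
    }

    // Returns false if any string field is longer than its target column.
    private boolean fitsColumns(SinkRecord record) {
        if (!(record.value() instanceof Struct)) {
            return true;
        }
        Struct value = (Struct) record.value();
        for (Field field : value.schema().fields()) {
            Integer maxLen = columnLengths.get(field.name());
            if (maxLen != null && field.schema().type() == Schema.Type.STRING) {
                String s = value.getString(field.name());
                if (s != null && s.length() > maxLen) {
                    return false;
                }
            }
        }
        return true;
    }

    private void writeToDatabase(SinkRecord record) {
        // Placeholder for the connector's actual JDBC write logic.
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "sketch"; }
}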



--
This message was sent by Atlassian Jira
(v8.3.4#803005)