Posted to dev@kafka.apache.org by "Ewen Cheslack-Postava (JIRA)" <ji...@apache.org> on 2016/07/25 16:48:20 UTC

[jira] [Created] (KAFKA-3988) KafkaConfigBackingStore assumes configs will be stored as schemaless maps

Ewen Cheslack-Postava created KAFKA-3988:
--------------------------------------------

             Summary: KafkaConfigBackingStore assumes configs will be stored as schemaless maps
                 Key: KAFKA-3988
                 URL: https://issues.apache.org/jira/browse/KAFKA-3988
             Project: Kafka
          Issue Type: Bug
          Components: KafkaConnect
    Affects Versions: 0.10.0.0
            Reporter: Ewen Cheslack-Postava
            Assignee: Ewen Cheslack-Postava


If you use an internal key/value converter that drops schema information (as the config files we provide do by default, since they use JsonConverter with schemas.enable=false), the values we serialize as Structs get deserialized as Maps, because the schema needed to decode them back into Structs is gone. Since our tests run with these settings, we have never validated that the code works when schemas are preserved.
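For context, the worker config files shipped with this version set the internal converters up roughly like this (property names as of 0.10.0; this is why the tests only ever exercised the schemaless path):

{quote}
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
{quote}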

When schemas are preserved, we hit an error message like this:
{quote}
[2016-07-25 07:36:34,828] ERROR Found connector configuration (connector-test-mysql-jdbc) in wrong format: class org.apache.kafka.connect.data.Struct (org.apache.kafka.connect.storage.KafkaConfigBackingStore:498)
{quote}
because the code currently checks only that it is working with a Map. It should accept either a Struct or a Map. The same problem likely affects other data handled by this class: connector configs, task configs, connector task lists, and target states are all stored as Structs.
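A minimal sketch of the kind of check the fix needs: normalize both representations before reading the config fields, rather than rejecting anything that is not a Map. The {{toMap}} helper and the tiny {{Struct}} class below are hypothetical stand-ins for Kafka Connect's {{org.apache.kafka.connect.data.Struct}}, not the actual patch:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigValueCheck {
    // Hypothetical stand-in for org.apache.kafka.connect.data.Struct:
    // just enough surface to show the shape of the check.
    static class Struct {
        private final Map<String, Object> fields;
        Struct(Map<String, Object> fields) { this.fields = fields; }
        Map<String, Object> asMap() { return new HashMap<>(fields); }
    }

    // Accept either the schemaless (Map) or schema-preserving (Struct)
    // representation and normalize to a Map before use.
    @SuppressWarnings("unchecked")
    static Map<String, Object> toMap(Object value) {
        if (value instanceof Map)
            return (Map<String, Object>) value;
        if (value instanceof Struct)
            return ((Struct) value).asMap();
        throw new IllegalArgumentException(
            "Found connector configuration in wrong format: " + value.getClass());
    }

    public static void main(String[] args) {
        Map<String, Object> raw = new HashMap<>();
        raw.put("properties", "connector.class=JdbcSource");
        // Both paths should succeed once the check accepts either type.
        System.out.println(toMap(raw).containsKey("properties"));
        System.out.println(toMap(new Struct(raw)).containsKey("properties"));
    }
}
```

The point of normalizing early is that the rest of the deserialization code can stay type-agnostic instead of branching on the converter's schema setting everywhere.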



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)