Posted to jira@kafka.apache.org by "Muddam Pullaiah Yadav (Jira)" <ji...@apache.org> on 2021/08/09 09:42:00 UTC

[jira] [Comment Edited] (KAFKA-13163) MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null

    [ https://issues.apache.org/jira/browse/KAFKA-13163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17395944#comment-17395944 ] 

Muddam Pullaiah Yadav edited comment on KAFKA-13163 at 8/9/21, 9:41 AM:
------------------------------------------------------------------------

Hi [~cricket007] 

When we use the file sink connector, the connector runs, but no data is inserted. When we use JdbcSinkConnector we get the above-mentioned error, and when we enable schemas (schemas.enable=true) we get a schema-and-payload error.

Could you please help with this issue?
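For reference, when a converter is configured with schemas.enable=true, JsonConverter expects every record to be a JSON envelope with top-level "schema" and "payload" fields; a bare JSON object, or a null key/value, fails to convert. A minimal sketch of the expected shape (field names here are illustrative, not taken from the actual topic):

```json
{
  "schema": {
    "type": "struct",
    "fields": [
      { "field": "id",   "type": "int32" },
      { "field": "name", "type": "string" }
    ],
    "optional": false
  },
  "payload": { "id": 1, "name": "example" }
}
```

If the records on the topic are plain JSON without this envelope, schemas.enable=true will reject them; conversely, a JDBC sink needs a value schema, so plain schemaless JSON cannot be written to MySQL without adding the envelope (or switching to a schema-aware converter such as Avro with a schema registry).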



> MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-13163
>                 URL: https://issues.apache.org/jira/browse/KAFKA-13163
>             Project: Kafka
>          Issue Type: Task
>          Components: KafkaConnect
>    Affects Versions: 2.1.1
>         Environment: PreProd
>            Reporter: Muddam Pullaiah Yadav
>            Priority: Major
>
> Please help with the following issue. Really appreciate it! 
>  
> We are using an Azure HDInsight Kafka cluster.
> My sink properties:
>  
> cat mysql-sink-connector
> {
>   "name": "mysql-sink-connector",
>   "config": {
>     "tasks.max": "2",
>     "batch.size": "1000",
>     "batch.max.rows": "1000",
>     "poll.interval.ms": "500",
>     "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
>     "connection.url": "jdbc:mysql://moddevdb.mysql.database.azure.com:3306/db_test_dev",
>     "table.name": "db_test_dev.tbl_clients_merchants",
>     "topics": "test",
>     "connection.user": "grabmod",
>     "connection.password": "#admin",
>     "auto.create": "true",
>     "auto.evolve": "true",
>     "value.converter": "org.apache.kafka.connect.json.JsonConverter",
>     "value.converter.schemas.enable": "false",
>     "key.converter": "org.apache.kafka.connect.json.JsonConverter",
>     "key.converter.schemas.enable": "true"
>   }
> }
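Note that the config above sets connector.class to org.apache.kafka.connect.file.FileStreamSinkConnector while supplying JDBC options, so the JDBC settings are ignored by that connector. A hedged sketch of what a MySQL sink config might look like instead, assuming the Confluent JDBC sink connector plugin is installed (the class name and "table.name.format" are that connector's, the connection details are carried over from the original, and schemas.enable is flipped to match a schema-carrying value and a plain key):

```json
{
  "name": "mysql-sink-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "2",
    "topics": "test",
    "connection.url": "jdbc:mysql://moddevdb.mysql.database.azure.com:3306/db_test_dev",
    "connection.user": "grabmod",
    "connection.password": "#admin",
    "table.name.format": "tbl_clients_merchants",
    "auto.create": "true",
    "auto.evolve": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false"
  }
}
```

With value.converter.schemas.enable=true, the records on the "test" topic must carry the schema/payload envelope so the JDBC sink can derive the table columns.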
>  
> [2021-08-04 11:18:30,234] ERROR WorkerSinkTask{id=mysql-sink-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
>  org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
>  at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>  at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>  at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
>  at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
>  at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
>  at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
>  at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
>  at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>  at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema type: null
>  at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
>  at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363)
>  at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:514)
>  at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>  at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>  ... 13 more
>  [2021-08-04 11:18:30,234] ERROR WorkerSinkTask{id=mysql-sink-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
>  [2021-08-04 11:18:30,235] INFO [Consumer clientId=consumer-18, groupId=connect-mysql-sink-connector] Sending LeaveGroup request to coordinator wn2-grabde.fkgw2p1emuqu5d21xcbqrhqqbf.rx.internal.cloudapp.net:9092 (id: 2147482646 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:782)
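One way to see why JsonConverter throws "Unknown schema type: null" is to check whether sample records actually carry the schema/payload envelope the converter expects when schemas.enable=true. A minimal sketch in plain Python (the sample record strings below are hypothetical, not dumped from the actual topic):

```python
import json

def check_envelope(raw: str) -> str:
    """Classify a record roughly the way JsonConverter (schemas.enable=true) sees it."""
    node = json.loads(raw)
    if node is None:
        # A null record gives the converter no schema node to read a type from.
        return "null record: no schema node -> 'Unknown schema type: null'"
    if not isinstance(node, dict) or "schema" not in node or "payload" not in node:
        # Bare JSON without the envelope cannot be converted with schemas enabled.
        return "bare JSON: missing schema/payload envelope"
    if not isinstance(node.get("schema"), dict) or "type" not in node["schema"]:
        return "envelope present but schema.type missing"
    return "ok: envelope with schema.type=" + node["schema"]["type"]

samples = [
    'null',
    '{"id": 1, "name": "example"}',
    '{"schema": {"type": "struct", "fields": [{"field": "id", "type": "int32"}]}, "payload": {"id": 1}}',
]
for s in samples:
    print(check_envelope(s))
```

In this issue, key.converter.schemas.enable is set to true, so null or plain-JSON keys alone are enough to trigger the exception even if the values are well-formed.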



--
This message was sent by Atlassian Jira
(v8.3.4#803005)