Posted to users@kafka.apache.org by Ewen Cheslack-Postava <ew...@confluent.io> on 2016/12/05 06:11:44 UTC

Re: FW: mirror and schema topics

Can you give more details about how you're setting up your mirror? It
sounds like the _schemas topic simply isn't being mirrored, but it's hard to
pin down the problem without knowing more about your mirroring setup.
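
If you're using the stock MirrorMaker tool, one common cause is a whitelist
that only matches the data topics, so the registry's backing topic never gets
copied. A rough sketch of what I mean (this assumes the default _schemas topic
name, hypothetical config file names, and a topic regex guessed from your
subject names; adjust all of these for your actual setup):

  # Hypothetical MirrorMaker invocation; include the registry's backing
  # topic (_schemas by default) in the whitelist along with the data topics.
  kafka-mirror-maker.sh \
    --consumer.config source-consumer.properties \
    --producer.config mirror-producer.properties \
    --whitelist 'cts_olog_.*|_schemas'

Alternatively, some people skip mirroring the registry entirely and point the
Connect workers on the mirror at the source cluster's Schema Registry via the
converter settings (e.g. value.converter.schema.registry.url). Either way, you
can check whether the id from your error exists on each registry with
something like: curl http://localhost:8081/schemas/ids/43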

-Ewen

On Wed, Nov 30, 2016 at 12:03 PM, Berryman, Eric <be...@frib.msu.edu>
wrote:

>
> Hello!
>
> I'm trying to mirror a Kafka cluster and then run Connect on the mirror.
> It seems the schemas are not getting copied to the mirror though, so I get
> the following error.
> Is this a configuration problem?
>
> Thank you for the help!
>
>
> Mirror>curl -X GET http://localhost:8081/subjects
> []
>
> Machine1>curl -X GET http://localhost:8081/subjects
> ["cts_olog_logbooks-value","cts_olog_logs_logbooks-value","
> cts_olog_entries-value","cts_olog_bitemporal_log-value","cts
> _olog_logs-value"]
>
> Mirror:
> [2016-11-30 13:12:07,612] ERROR Task cts-olog-bi-jdbc-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
> org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
>         at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:239)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:172)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
>         at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>         at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 43
> Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema not found; error code: 40403
>         at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:170)
>         at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:187)
>         at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:323)
>         at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:316)
>         at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:63)
>         at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndID(CachedSchemaRegistryClient.java:118)
>         at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:121)
>         at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:190)
>         at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:130)
>         at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:99)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:358)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:239)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:172)
>         at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:143)
>         at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
>         at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> [2016-11-30 13:12:07,637] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:143)
> [2016-11-30 13:12:07,637] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:88)
>



-- 
Thanks,
Ewen