Posted to users@kafka.apache.org by "Artem B." <a....@gmail.com> on 2017/06/15 10:00:14 UTC

Kafka Connect fails to read offset

Hi all,

Each time I start my source connector, it fails to read the offsets stored in
its offset file, with the following error:

21:05:01:519 | ERROR | pool-1-thread-1 | o.a.k.c.s.OffsetStorageReaderImpl |
CRITICAL: Failed to deserialize offset data when getting offsets for task with
namespace zohocrm-source-calls. No value for this data will be returned, which
may break the task or cause it to skip some data. This could either be due to
an error in the connector implementation or incompatible schema.
org.apache.kafka.connect.errors.DataException: JsonConverter with
schemas.enable requires "schema" and "payload" fields and may not contain
additional fields. If you are trying to deserialize plain JSON data, set
schemas.enable=false in your converter configuration.
        at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:309)
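
If I read the exception right, with schemas.enable=true the JsonConverter only
accepts JSON wrapped in a schema/payload envelope. For illustration (the offset
value below is made up), the first line is the shape it accepts and the second
is plain JSON that triggers the DataException above:

        {"schema":{"type":"int64","optional":false},"payload":1497470000000}
        {"last_modified":1497470000000}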


While the connector is running, though, offsets are committed successfully:

13:49:42:634 | INFO  | pool-5-thread-1 | o.a.k.c.r.WorkerSourceTask |
Finished WorkerSourceTask{id=zohocrm-source-calls-0} commitOffsets
successfully in 0 ms

However, when I stop and restart it, the same error appears.

Here are my StandaloneConfig values:

        access.control.allow.methods =
        access.control.allow.origin =
        bootstrap.servers = [localhost:9092]
        internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
        internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
        key.converter = class io.confluent.connect.avro.AvroConverter
        offset.flush.interval.ms = 60000
        offset.flush.timeout.ms = 5000
        offset.storage.file.filename = maxoptra-data.offset
        rest.advertised.host.name = null
        rest.advertised.port = null
        rest.host.name = null
        rest.port = 8083
        task.shutdown.graceful.timeout.ms = 5000
        value.converter = class io.confluent.connect.avro.AvroConverter
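
Note that this dump does not show internal.key.converter.schemas.enable or
internal.value.converter.schemas.enable, which I believe default to true. In a
standalone worker properties file the relevant lines would look like this
(sketch, assuming the defaults):

        internal.key.converter=org.apache.kafka.connect.json.JsonConverter
        internal.value.converter=org.apache.kafka.connect.json.JsonConverter
        internal.key.converter.schemas.enable=true
        internal.value.converter.schemas.enable=true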


Here is my connector config:

        connector.class = com.maxoptra.data.zoho.connect.ZohoCrmSourceConnector
        key.converter = null
        name = zohocrm-source-calls
        tasks.max = 1
        transforms = null
        value.converter = null
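
As I understand it, key.converter and value.converter being null here just
means the connector falls back to the worker's AvroConverter, and per-connector
converters only apply to records written to Kafka topics, not to the offset
file, so I assume they are not involved. A hypothetical per-connector override
would look like:

        key.converter=org.apache.kafka.connect.json.JsonConverter
        value.converter=org.apache.kafka.connect.json.JsonConverter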

Please advise.

Thank you

-- 
With best regards,
Artem.

Re: Kafka Connect fails to read offset

Posted by Randall Hauch <rh...@gmail.com>.
At any time, did your standalone worker config contain
`internal.value.converter.schemas.enable=false`?
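
If it did at any point, that would explain the symptom: offsets written while
schemas.enable was false are plain JSON, and a worker restarted with the
default (true) can no longer deserialize them, which is why only the startup
read fails while commits keep succeeding. A minimal sketch of the fix in the
standalone worker properties file (keep these consistent with whatever wrote
maxoptra-data.offset):

        internal.key.converter.schemas.enable=false
        internal.value.converter.schemas.enable=false

Alternatively, if losing the stored position is acceptable, stop the worker and
delete maxoptra-data.offset so the connector starts from scratch.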
