Posted to users@kafka.apache.org by Gary Struthers <ag...@earthlink.net> on 2016/02/17 04:12:29 UTC
0.9 client AbstractCoordinator - Attempt to join group failed due to obsolete coordinator information
Hi,
My local Java client consumer and producer fail with log messages I don’t understand. What does “obsolete coordinator information” mean?
2016-02-16 18:49:01,795 INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
2016-02-16 18:49:01,795 INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
2016-02-16 18:49:02,756 INFO o.a.k.c.c.i.AbstractCoordinator - Marking the coordinator 2147483647 dead.
2016-02-16 18:49:02,759 INFO o.a.k.c.c.i.AbstractCoordinator - Attempt to join group dendrites-group failed due to obsolete coordinator information, retrying.
2016-02-16 18:50:01,881 INFO o.a.k.clients.producer.KafkaProducer - Closing the Kafka producer with timeoutMillis = 100 ms.
Here are the logged configs:
2016-02-16 18:49:01,093 INFO o.a.k.c.producer.ProducerConfig - ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 1
2016-02-16 18:49:01,217 INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
group.id = dendrites-group
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
enable.auto.commit = false
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
session.timeout.ms = 30000
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
check.crcs = true
request.timeout.ms = 40000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 1000
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
fetch.min.bytes = 1024
send.buffer.bytes = 131072
auto.offset.reset = latest
thanks,
Gary
Re: 0.9 client AbstractCoordinator - Attempt to join group failed due to obsolete coordinator information
Posted by Jason Gustafson <ja...@confluent.io>.
Hi Gary,
The coordinator is a special broker which is chosen for each consumer group
to manage its state. It facilitates group membership, partition assignment
and offset commits. If the coordinator is shut down, Kafka will choose
another broker to assume the role. The log message might be a little
unclear, but basically it is saying that the broker the consumer thought
was its coordinator no longer is (maybe because it's shutting down). Note
that this is not an error: the consumer will rediscover the new coordinator
and continue happily along.
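To make that concrete, here is a rough sketch of an ordinary poll loop
(the topic name "dendrites" and the class name are placeholders, not taken
from your logs; the group id and deserializers come from your
ConsumerConfig). The coordinator rediscovery happens inside poll(), so the
application does not need to react to those INFO messages:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DendritesConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "dendrites-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            // Topic name is a placeholder for this sketch.
            consumer.subscribe(Collections.singletonList("dendrites"));
            while (true) {
                // If the coordinator moves, the client logs the INFO messages you saw,
                // rejoins the group, and poll() keeps returning records afterwards.
                ConsumerRecords<String, byte[]> records = consumer.poll(500);
                for (ConsumerRecord<String, byte[]> record : records) {
                    System.out.printf("offset=%d key=%s%n", record.offset(), record.key());
                }
                consumer.commitSync();
            }
        }
    }
}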
Thanks,
Jason