Posted to users@kafka.apache.org by Srikrishna Alla <al...@gmail.com> on 2016/11/29 00:52:38 UTC
Kafka Connect consumer not using the group.id provided in connect-distributed.properties
Hi,
I am using Kafka Connect Sink Connector to consume messages from a Kafka
topic in a secure Kafka cluster. I have provided the group.id in
connect-distributed.properties. I am using security.protocol as
SASL_PLAINTEXT.
Here is the definition of group.id in connect-distributed.properties -
[sa9726@clpd355 conf]$ less connect-distributed.properties |grep -i group
group.id=alert
In the Kafka Connect log, it's picking up the group.id from the properties
file -
2016-11-28 15:49:40 INFO DistributedConfig : 165 - DistributedConfig
values:
request.timeout.ms = 40000
retry.backoff.ms = 100
Re: Kafka Connect consumer not using the group.id provided in connect-distributed.properties
Posted by Srikrishna Alla <al...@gmail.com>.
Hi,
I am using Kafka Connect Sink Connector to consume messages from a Kafka
topic in a secure Kafka cluster. I have provided the group.id in
connect-distributed.properties. I am using security.protocol as
SASL_PLAINTEXT.
Here is the definition of group.id in connect-distributed.properties -
bootstrap.servers=<server>:6667
group.id=alert
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.topic=connect-offsets
offset.flush.interval.ms=10000
config.storage.topic=connect-configs
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
producer.sasl.kerberos.service.name=kafka
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.mechanism=GSSAPI
consumer.sasl.kerberos.service.name=kafka
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.mechanism=GSSAPI
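(For reference, this is roughly how I've been double-checking the worker-level group.id against the consumer-level overrides in the file; the sample below is a trimmed, hypothetical copy of the config above, so adjust the path and contents to match your own file.)

```shell
# Hypothetical trimmed sample of the worker config; the real file is the
# connect-distributed.properties shown above.
cat > /tmp/connect-distributed.properties <<'EOF'
bootstrap.servers=host:6667
group.id=alert
consumer.security.protocol=SASL_PLAINTEXT
EOF

# Pull out the worker group.id and every consumer.* override in one pass,
# to confirm what the worker should be reading at startup.
grep -E '^(group\.id|consumer\.)' /tmp/connect-distributed.properties
```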
In the Kafka Connect log, it's picking up the group.id from the properties
file -
2016-11-28 15:49:40 INFO DistributedConfig : 165 - DistributedConfig
values:
request.timeout.ms = 40000
retry.backoff.ms = 100
......
group.id = alert
metric.reporters = []
ssl.truststore.type = JKS
cluster = connect
But when it's instantiating a sink connector task, it's using a new
consumer group -
2016-11-28 15:49:43 INFO ConsumerConfig : 165 - ConsumerConfig values:
request.timeout.ms = 40000
check.crcs = true
retry.backoff.ms = 100
......
group.id = connect-sink-connector
enable.auto.commit = false
metric.reporters = []
ssl.truststore.type = JKS
send.buffer.bytes = 131072
Why is this happening? I have not faced this issue on an unsecured cluster.
Is there a configuration property I am missing?
Thanks in advance for your help.
-Sri