Posted to jira@kafka.apache.org by "Sönke Liebau (JIRA)" <ji...@apache.org> on 2018/12/18 13:01:00 UTC

[jira] [Commented] (KAFKA-7749) confluent does not provide option to set consumer properties at connector level

    [ https://issues.apache.org/jira/browse/KAFKA-7749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724042#comment-16724042 ] 

Sönke Liebau commented on KAFKA-7749:
-------------------------------------

 Hi [~mduhan], 

 

this is closely related to a discussion that is currently taking place on the dev mailing list in the [MirrorMaker 2.0|https://lists.apache.org/thread.html/12c7171d957f3ca4f809b6365e788d7fa9715f4f41c3a554c6529761@%3Cdev.kafka.apache.org%3E] thread - it might be worthwhile chiming in there as well.

 

Quick question on what you wrote: you list a couple of configuration settings. Is your code restricted to these settings, or does it allow arbitrary settings to be passed through to the consumer, with these just being the ones you found useful?

Does the same general principle also apply to the producer code?

> confluent does not provide option to set consumer properties at connector level
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-7749
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7749
>             Project: Kafka
>          Issue Type: Improvement
>          Components: KafkaConnect
>            Reporter: Manjeet Duhan
>            Priority: Major
>
> We want to increase consumer.max.poll.records to improve performance, but this value can only be set in the worker properties, which apply to all connectors in a given cluster.
>
> Operative situation: We have one project communicating with Elasticsearch, and we set consumer.max.poll.records=500 after multiple performance tests; this worked fine for a year.
> Then another project onboarded onto the same cluster and required consumer.max.poll.records=5000 based on its performance tests. This configuration was moved to production.
> Admetric then started failing: it took more than five minutes to process 5000 polled records and began throwing CommitFailedException, which is a vicious cycle, as the same data gets processed over and over again.
>
> We can control the above when starting a consumer from plain Java, but this control was not available per connector in the Confluent connector.
> I have overridden Kafka code to accept connector-level properties that are applied to a single connector, while the other connectors keep using the default properties. These changes have been running in production for more than five months.
> Some of the properties that were useful for us:
> max.poll.records
> max.poll.interval.ms
> request.timeout.ms
> key.deserializer
> value.deserializer
> heartbeat.interval.ms
> session.timeout.ms
> auto.offset.reset
> connections.max.idle.ms
> enable.auto.commit
> auto.commit.interval.ms
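For anyone hitting the same limitation: later Apache Kafka releases (2.3.0 and up, via KIP-458) added per-connector client overrides. The worker must opt in with connector.client.config.override.policy=All (the default, None, rejects overrides), and the connector config then uses the consumer.override. prefix. A sketch of such a connector config; the connector name "elasticsearch-sink" and topic "admetric-events" are hypothetical placeholders:

```json
{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "admetric-events",

    "consumer.override.max.poll.records": "500",
    "consumer.override.max.poll.interval.ms": "300000"
  }
}
```

Connectors without consumer.override.* keys keep using the worker-level defaults, which is the same isolation the patch described above provides.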



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)