Posted to users@kafka.apache.org by Deepak Jain <de...@cumulus-systems.com> on 2022/06/02 15:22:40 UTC

Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Hello Everyone,

We are using Kafka 2.8.1 Broker/Client system in our prod env.

We are getting the following exception randomly, after an hour or so, on one out of five realtime transfers from a Kafka producer to the broker. (The other four keep working fine.)

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic realtimeImport_1 not present in metadata after 250 ms.
                at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
                at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
                at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
                at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:773)

We are using multithreaded KafkaProducers, each with its own unique topic, sending data to a single broker. We notice that this exception appears after we reconnect to Kafka, i.e. after calling KafkaProducer.close() and then constructing a new producer via KafkaProducer(Properties properties). We are not sure whether this is the culprit.
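
To make the setup concrete, below is a minimal sketch of the per-thread pattern described above (the class name, broker address, serializer classes and key/value types are placeholders for illustration, not our exact production code):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RealtimeSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-host:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "250"); // the low block timeout we currently use

        // One producer per thread, each thread writing to its own topic.
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            producer.send(new ProducerRecord<>("realtimeImport_1", "key", "value")).get();
        } catch (Exception e) {
            // The ExecutionException wrapping the TimeoutException shown above surfaces here.
            e.printStackTrace();
        }

        // On "reconnect" the existing producer is closed and a new one is constructed.
        producer.close();
        producer = new KafkaProducer<>(props);
        producer.close();
    }
}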

Due to this exception, the realtime resources are not getting transferred to the Kafka consumer. We are using the following config on the Kafka broker:

group.initial.rebalance.delay.ms=0
listeners=SASL_PLAINTEXT://0.0.0.0:9092
log.retention.minutes=15
delete.topic.enable=true
auto.create.topics.enable=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
security.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
zookeeper.set.acl=true
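
On the producer side, the client security settings look roughly like this, matching the broker's SASL_PLAINTEXT/PLAIN listener (a sketch only; the JAAS username and password are placeholders):

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<user>" password="<password>";
max.block.ms=250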

Can anyone please help us find the root cause?

Regards,
Deepak Jain
Cumulus Systems

Re: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Posted by 张晓寅 <zh...@gmail.com>.
Increase the block time (max.block.ms) to 3 minutes. It is an exception about updating
metadata; try it!
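
For example, as a producer property (the 3-minute value is just the suggestion above, in milliseconds):

props.put("max.block.ms", "180000"); // 3 minutes instead of the current 250 ms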

On Tue, Jun 7, 2022 at 3:15 PM Deepak Jain <de...@cumulus-systems.com>
wrote:

> Hi Luke,
>
> The complete exception is
>
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.TimeoutException: Topic realtimeImport_1 not
> present in metadata after 250 ms.
>                 at
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:773)
>
> Even though the topic is created and used but it still throws this
> exception and fails the operation.
>
> Regards,
> Deepak
>
> From: Luke Chen <sh...@gmail.com>
> Sent: 07 June 2022 11:46
> To: Deepak Jain <de...@cumulus-systems.com>
> Cc: users@kafka.apache.org
> Subject: Re: Random continuous TimeoutException with Topic not present on
> one KafkaProducer out of many in multithreaded env
>
> Hi Deepak,
>
> So, if you change the value in max.block.ms to
> default 1 minute, does the timeout exception still exist?
> I think the timeoutException is complaining the 250ms is not a good
> configuration for your environment.
>
> Thank you.
> Luke
>
> On Tue, Jun 7, 2022 at 11:23 AM Deepak Jain <deepak.jain@cumulus-systems.com>
> wrote:
> Hi,
>
> Thanks for the quick reply.
>
> We are already using the config max.block.ms (alongwith with other
> recommended config like request.timeout.ms and others). Although the value we are using
> is very less at 250 ms but since we have 5 different KafkaProducer running
> in each individual thread out of which 4 are working without any issue and
> only 1 is throwing the TimeOutException, so this does not seems to be the
> issue,
>
> Please else us know if anybody had came across this type of behaviour by
> Kafka. If yes, please help in finding out the root cause and resolving it.
>
> Regards,
> Deepak
>
> -----Original Message-----
> From: 张晓寅 <zh...@gmail.com>
> Sent: 06 June 2022 19:10
> To: users@kafka.apache.org
> Cc: Luke Chen <sh...@gmail.com>
> Subject: Re: Random continuous TimeoutException with Topic not present on
> one KafkaProducer out of many in multithreaded env
>
> maybe you can add producer "max.block.ms" config,but
> you should test your broker look up some logs  about leader change
> ,producer performance,like traffic ,produce "buffer" and "batch.size"
>
> On Mon, Jun 6, 2022 at 6:53 PM Deepak Jain <deepak.jain@cumulus-systems.com>
> wrote:
>
> > Hello All,
> >
> > Please help me out in this regard as the Customer has reported this on
> > their production environment and waiting for our reply ASAP.
> >
> > Regards,
> > Deepak
> >
> > From: Deepak Jain
> > Sent: 02 June 2022 20:53
> > To: 'users@kafka.apache.org' <us...@kafka.apache.org>
> > Cc: 'Luke Chen' <sh...@gmail.com>; Alap Patwardhan <
> > alap@cumulus-systems.com>; Bhushan Patil <
> > bhushan.patil@cumulus-systems.com>
> > Subject: Random continuous TimeoutException with Topic not present on
> > one KafkaProducer out of many in multithreaded env
> >
> > Hello Everyone,
> >
> > We are using Kafka 2.8.1 Broker/Client system in our prod env.
> >
> > Getting following exception randomly after 1 hour or so for one
> > Realtime transfer from Kafka Producer to broker out of 5. (Rest 4 are
> > working fine.)
> >
> > java.util.concurrent.ExecutionException:
> > org.apache.kafka.common.errors.TimeoutException: Topic
> > realtimeImport_1 not present in metadata after 250 ms.
> >                 at
> >
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
> >                 at
> >
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
> >                 at
> >
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
> >                 at
> > org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.jav
> > a:773)
> >
> > We are using multithreaded KafkaProducer with their each unique topic
> > sending data to single broker. Here, we notice that this exception
> > comes when we reconnect to Kafka using close() (void
> > KafkaProducer.close()) and KafkaProducer(Properties properties) calls. Not
> > sure whether this is the culprit or not.
> >
> > Due to this exception the realtime resources are not getting transfer
> > to Kafka Consumer. We are using following config on Kafka Broker:
> >
> > group.initial.rebalance.delay.ms=0
> > listeners=SASL_PLAINTEXT://0.0.0.0:9092
> > log.retention.minutes=15
> > delete.topic.enable=true
> > auto.create.topics.enable=true
> > security.inter.broker.protocol=SASL_PLAINTEXT
> > sasl.mechanism.inter.broker.protocol=PLAIN
> > sasl.enabled.mechanisms=PLAIN
> > security.protocol=SASL_PLAINTEXT
> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> > allow.everyone.if.no.acl.found=true
> > zookeeper.set.acl=true
> >
> > Can anyone please help us in finding the root cause for it?
> >
> > Regards,
> > Deepak Jain
> > Cumulus Systems
> >
>

RE: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Posted by Deepak Jain <de...@cumulus-systems.com>.
Hi Luke,

The complete exception is

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic realtimeImport_1 not present in metadata after 250 ms.
                at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
                at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
                at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
                at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:773)

Even though the topic has been created and is in use, the producer still throws this exception and the operation fails.
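
For what it's worth, the topic metadata can also be checked directly from the producer before sending; a hypothetical diagnostic along these lines (producer is the existing KafkaProducer instance, PartitionInfo is org.apache.kafka.common.PartitionInfo):

// Blocks up to max.block.ms and throws the same TimeoutException if the
// metadata for the topic cannot be fetched in that time.
List<PartitionInfo> partitions = producer.partitionsFor("realtimeImport_1");
System.out.println("Partitions known to this producer: " + partitions);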

Regards,
Deepak

From: Luke Chen <sh...@gmail.com>
Sent: 07 June 2022 11:46
To: Deepak Jain <de...@cumulus-systems.com>
Cc: users@kafka.apache.org
Subject: Re: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Hi Deepak,

So, if you change the value in max.block.ms to default 1 minute, does the timeout exception still exist?
I think the timeoutException is complaining the 250ms is not a good configuration for your environment.

Thank you.
Luke

On Tue, Jun 7, 2022 at 11:23 AM Deepak Jain <de...@cumulus-systems.com> wrote:
Hi,

Thanks for the quick reply.

We are already using the config max.block.ms (alongwith with other recommended config like request.timeout.ms and others). Although the value we are using is very less at 250 ms but since we have 5 different KafkaProducer running in each individual thread out of which 4 are working without any issue and only 1 is throwing the TimeOutException, so this does not seems to be the issue,

Please else us know if anybody had came across this type of behaviour by Kafka. If yes, please help in finding out the root cause and resolving it.

Regards,
Deepak

-----Original Message-----
From: 张晓寅 <zh...@gmail.com>
Sent: 06 June 2022 19:10
To: users@kafka.apache.org
Cc: Luke Chen <sh...@gmail.com>
Subject: Re: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

maybe you can add producer "max.block.ms" config,but you should test your broker look up some logs  about leader change ,producer performance,like traffic ,produce "buffer" and "batch.size"

On Mon, Jun 6, 2022 at 6:53 PM Deepak Jain <de...@cumulus-systems.com>
wrote:

> Hello All,
>
> Please help me out in this regard as the Customer has reported this on
> their production environment and waiting for our reply ASAP.
>
> Regards,
> Deepak
>
> From: Deepak Jain
> Sent: 02 June 2022 20:53
> To: 'users@kafka.apache.org' <us...@kafka.apache.org>
> Cc: 'Luke Chen' <sh...@gmail.com>; Alap Patwardhan <
> alap@cumulus-systems.com>; Bhushan Patil <
> bhushan.patil@cumulus-systems.com>
> Subject: Random continuous TimeoutException with Topic not present on
> one KafkaProducer out of many in multithreaded env
>
> Hello Everyone,
>
> We are using Kafka 2.8.1 Broker/Client system in our prod env.
>
> Getting following exception randomly after 1 hour or so for one
> Realtime transfer from Kafka Producer to broker out of 5. (Rest 4 are
> working fine.)
>
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.TimeoutException: Topic
> realtimeImport_1 not present in metadata after 250 ms.
>                 at
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.jav
> a:773)
>
> We are using multithreaded KafkaProducer with their each unique topic
> sending data to single broker. Here, we notice that this exception
> comes when we reconnect to Kafka using close() (void KafkaProducer.close())
> and KafkaProducer(Properties properties) calls. Not sure whether this is the
> culprit or not.
>
> Due to this exception the realtime resources are not getting transfer
> to Kafka Consumer. We are using following config on Kafka Broker:
>
> group.initial.rebalance.delay.ms=0
> listeners=SASL_PLAINTEXT://0.0.0.0:9092
> log.retention.minutes=15
> delete.topic.enable=true
> auto.create.topics.enable=true
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
> security.protocol=SASL_PLAINTEXT
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> allow.everyone.if.no.acl.found=true
> zookeeper.set.acl=true
>
> Can anyone please help us in finding the root cause for it?
>
> Regards,
> Deepak Jain
> Cumulus Systems
>

Re: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Posted by Luke Chen <sh...@gmail.com>.
Hi Deepak,

So, if you change the value of max.block.ms to the default 1 minute, does the
timeout exception still occur?
I think the TimeoutException is telling you that 250 ms is not a good
configuration for your environment.
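
For example, setting it explicitly (or simply removing the 250 ms override so the 60-second default applies):

props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "60000"); // default is 1 minute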

Thank you.
Luke

On Tue, Jun 7, 2022 at 11:23 AM Deepak Jain <de...@cumulus-systems.com>
wrote:

> Hi,
>
> Thanks for the quick reply.
>
> We are already using the config max.block.ms (alongwith with other
> recommended config like request.timeout.ms and others). Although the
> value we are using is very less at 250 ms but since we have 5 different
> KafkaProducer running in each individual thread out of which 4 are working
> without any issue and only 1 is throwing the TimeOutException, so this does
> not seems to be the issue,
>
> Please else us know if anybody had came across this type of behaviour by
> Kafka. If yes, please help in finding out the root cause and resolving it.
>
> Regards,
> Deepak
>
> -----Original Message-----
> From: 张晓寅 <zh...@gmail.com>
> Sent: 06 June 2022 19:10
> To: users@kafka.apache.org
> Cc: Luke Chen <sh...@gmail.com>
> Subject: Re: Random continuous TimeoutException with Topic not present on
> one KafkaProducer out of many in multithreaded env
>
> maybe you can add producer "max.block.ms" config,but you should test your
> broker look up some logs  about leader change ,producer performance,like
> traffic ,produce "buffer" and "batch.size"
>
> On Mon, Jun 6, 2022 at 6:53 PM Deepak Jain <
> deepak.jain@cumulus-systems.com>
> wrote:
>
> > Hello All,
> >
> > Please help me out in this regard as the Customer has reported this on
> > their production environment and waiting for our reply ASAP.
> >
> > Regards,
> > Deepak
> >
> > From: Deepak Jain
> > Sent: 02 June 2022 20:53
> > To: 'users@kafka.apache.org' <us...@kafka.apache.org>
> > Cc: 'Luke Chen' <sh...@gmail.com>; Alap Patwardhan <
> > alap@cumulus-systems.com>; Bhushan Patil <
> > bhushan.patil@cumulus-systems.com>
> > Subject: Random continuous TimeoutException with Topic not present on
> > one KafkaProducer out of many in multithreaded env
> >
> > Hello Everyone,
> >
> > We are using Kafka 2.8.1 Broker/Client system in our prod env.
> >
> > Getting following exception randomly after 1 hour or so for one
> > Realtime transfer from Kafka Producer to broker out of 5. (Rest 4 are
> > working fine.)
> >
> > java.util.concurrent.ExecutionException:
> > org.apache.kafka.common.errors.TimeoutException: Topic
> > realtimeImport_1 not present in metadata after 250 ms.
> >                 at
> >
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
> >                 at
> >
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
> >                 at
> >
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
> >                 at
> > org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.jav
> > a:773)
> >
> > We are using multithreaded KafkaProducer with their each unique topic
> > sending data to single broker. Here, we notice that this exception
> > comes when we reconnect to Kafka using close() (void
> > KafkaProducer.close()) and KafkaProducer(Properties properties) calls. Not
> > sure whether this is the culprit or not.
> >
> > Due to this exception the realtime resources are not getting transfer
> > to Kafka Consumer. We are using following config on Kafka Broker:
> >
> > group.initial.rebalance.delay.ms=0
> > listeners=SASL_PLAINTEXT://0.0.0.0:9092
> > log.retention.minutes=15
> > delete.topic.enable=true
> > auto.create.topics.enable=true
> > security.inter.broker.protocol=SASL_PLAINTEXT
> > sasl.mechanism.inter.broker.protocol=PLAIN
> > sasl.enabled.mechanisms=PLAIN
> > security.protocol=SASL_PLAINTEXT
> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> > allow.everyone.if.no.acl.found=true
> > zookeeper.set.acl=true
> >
> > Can anyone please help us in finding the root cause for it?
> >
> > Regards,
> > Deepak Jain
> > Cumulus Systems
> >
>

RE: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Posted by Deepak Jain <de...@cumulus-systems.com>.
Hi,

Thanks for the quick reply.

We are already using the max.block.ms config (along with other recommended configs like request.timeout.ms). The value we are using is quite low at 250 ms, but since we have 5 different KafkaProducers, each running in its own thread, of which 4 are working without any issue and only 1 is throwing the TimeoutException, this does not seem to be the issue.

Please let us know if anybody has come across this type of behaviour with Kafka. If yes, please help us find the root cause and resolve it.

Regards,
Deepak

-----Original Message-----
From: 张晓寅 <zh...@gmail.com> 
Sent: 06 June 2022 19:10
To: users@kafka.apache.org
Cc: Luke Chen <sh...@gmail.com>
Subject: Re: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

maybe you can add producer "max.block.ms" config,but you should test your broker look up some logs  about leader change ,producer performance,like traffic ,produce "buffer" and "batch.size"

On Mon, Jun 6, 2022 at 6:53 PM Deepak Jain <de...@cumulus-systems.com>
wrote:

> Hello All,
>
> Please help me out in this regard as the Customer has reported this on 
> their production environment and waiting for our reply ASAP.
>
> Regards,
> Deepak
>
> From: Deepak Jain
> Sent: 02 June 2022 20:53
> To: 'users@kafka.apache.org' <us...@kafka.apache.org>
> Cc: 'Luke Chen' <sh...@gmail.com>; Alap Patwardhan < 
> alap@cumulus-systems.com>; Bhushan Patil < 
> bhushan.patil@cumulus-systems.com>
> Subject: Random continuous TimeoutException with Topic not present on 
> one KafkaProducer out of many in multithreaded env
>
> Hello Everyone,
>
> We are using Kafka 2.8.1 Broker/Client system in our prod env.
>
> Getting following exception randomly after 1 hour or so for one 
> Realtime transfer from Kafka Producer to broker out of 5. (Rest 4 are 
> working fine.)
>
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.TimeoutException: Topic 
> realtimeImport_1 not present in metadata after 250 ms.
>                 at
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.jav
> a:773)
>
> We are using multithreaded KafkaProducer with their each unique topic 
> sending data to single broker. Here, we notice that this exception 
> comes when we reconnect to Kafka using close() (void KafkaProducer.close())
> and KafkaProducer(Properties properties) calls. Not sure whether this is the
> culprit or not.
>
> Due to this exception the realtime resources are not getting transfer 
> to Kafka Consumer. We are using following config on Kafka Broker:
>
> group.initial.rebalance.delay.ms=0
> listeners=SASL_PLAINTEXT://0.0.0.0:9092
> log.retention.minutes=15
> delete.topic.enable=true
> auto.create.topics.enable=true
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
> security.protocol=SASL_PLAINTEXT
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> allow.everyone.if.no.acl.found=true
> zookeeper.set.acl=true
>
> Can anyone please help us in finding the root cause for it?
>
> Regards,
> Deepak Jain
> Cumulus Systems
>

Re: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Posted by 张晓寅 <zh...@gmail.com>.
Maybe you can add the producer "max.block.ms" config, but you should also check your
broker: look through the logs for leader changes, and look at producer performance,
such as traffic and the producer "buffer" and "batch.size" settings.
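
As a rough illustration, the producer settings referred to here are along these lines (the values shown are just the client library defaults to start experimenting from, not recommendations for this workload):

max.block.ms=60000      # how long send() may block waiting for metadata or buffer space
buffer.memory=33554432  # total memory for unsent records (32 MB default)
batch.size=16384        # per-partition batch size in bytes (16 KB default)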

On Mon, Jun 6, 2022 at 6:53 PM Deepak Jain <de...@cumulus-systems.com>
wrote:

> Hello All,
>
> Please help me out in this regard as the Customer has reported this on
> their production environment and waiting for our reply ASAP.
>
> Regards,
> Deepak
>
> From: Deepak Jain
> Sent: 02 June 2022 20:53
> To: 'users@kafka.apache.org' <us...@kafka.apache.org>
> Cc: 'Luke Chen' <sh...@gmail.com>; Alap Patwardhan <
> alap@cumulus-systems.com>; Bhushan Patil <
> bhushan.patil@cumulus-systems.com>
> Subject: Random continuous TimeoutException with Topic not present on one
> KafkaProducer out of many in multithreaded env
>
> Hello Everyone,
>
> We are using Kafka 2.8.1 Broker/Client system in our prod env.
>
> Getting following exception randomly after 1 hour or so for one Realtime
> transfer from Kafka Producer to broker out of 5. (Rest 4 are working fine.)
>
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.TimeoutException: Topic realtimeImport_1 not
> present in metadata after 250 ms.
>                 at
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
>                 at
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:773)
>
> We are using multithreaded KafkaProducer with their each unique topic
> sending data to single broker. Here, we notice that this exception comes
> when we reconnect to Kafka using close() (void KafkaProducer.close()) and
> KafkaProducer(Properties properties) calls. Not sure whether this is the
> culprit or not.
>
> Due to this exception the realtime resources are not getting transfer to
> Kafka Consumer. We are using following config on Kafka Broker:
>
> group.initial.rebalance.delay.ms=0
> listeners=SASL_PLAINTEXT://0.0.0.0:9092
> log.retention.minutes=15
> delete.topic.enable=true
> auto.create.topics.enable=true
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
> security.protocol=SASL_PLAINTEXT
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> allow.everyone.if.no.acl.found=true
> zookeeper.set.acl=true
>
> Can anyone please help us in finding the root cause for it?
>
> Regards,
> Deepak Jain
> Cumulus Systems
>

RE: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Posted by Deepak Jain <de...@cumulus-systems.com>.
Hello All,

Please help me out in this regard, as the customer has reported this in their production environment and is waiting for our reply ASAP.

Regards,
Deepak

From: Deepak Jain
Sent: 02 June 2022 20:53
To: 'users@kafka.apache.org' <us...@kafka.apache.org>
Cc: 'Luke Chen' <sh...@gmail.com>; Alap Patwardhan <al...@cumulus-systems.com>; Bhushan Patil <bh...@cumulus-systems.com>
Subject: Random continuous TimeoutException with Topic not present on one KafkaProducer out of many in multithreaded env

Hello Everyone,

We are using Kafka 2.8.1 Broker/Client system in our prod env.

Getting following exception randomly after 1 hour or so for one Realtime transfer from Kafka Producer to broker out of 5. (Rest 4 are working fine.)

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic realtimeImport_1 not present in metadata after 250 ms.
                at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1316)
                at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:985)
                at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:885)
                at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:773)

We are using multithreaded KafkaProducer with their each unique topic sending data to single broker. Here, we notice that this exception comes when we reconnect to Kafka using close() (void KafkaProducer.close()) and KafkaProducer(Properties properties) calls. Not sure whether this is the culprit or not.

Due to this exception the realtime resources are not getting transfer to Kafka Consumer. We are using following config on Kafka Broker:

group.initial.rebalance.delay.ms=0
listeners=SASL_PLAINTEXT://0.0.0.0:9092
log.retention.minutes=15
delete.topic.enable=true
auto.create.topics.enable=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
security.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
zookeeper.set.acl=true

Can anyone please help us in finding the root cause for it?

Regards,
Deepak Jain
Cumulus Systems