Posted to users@kafka.apache.org by Pulkit Manchanda <pu...@gmail.com> on 2018/08/17 15:55:48 UTC

NetworkException while sending/publishing records (Producer)

Hi All,

I am sending multiple records to the same topic.
I have two approaches:
1) sharing one producer with all the threads
2) creating a new producer for every thread.

I am sending records of ~150 MB each over multiple requests.
I am running the cluster and the app on my local machine, with 3 brokers and
max.request.size set to 1 GB.

While sending the records using the code below, approach 2) (a new producer
per thread) gives me a NetworkException, and approach 1) (the shared producer)
gives me the same NetworkException and sometimes a TimeoutException too.
I looked on Google and StackOverflow but didn't find any solution to the
NetworkException.

val metadata = producer.send(record).get()


java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.NetworkException: The server disconnected
before a response was received.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
at service.KafkaService.sendRecordToKafka(KafkaService.scala:65)
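
For context, each thread builds its producer and sends roughly like this (a
sketch, not my exact code; the topic name, key, payload, and serializers are
placeholders):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

// Sketch of approach 2): one producer per thread.
val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.StringSerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.ByteArraySerializer")
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, (1024 * 1024 * 1024).toString) // 1 GB

val producer = new KafkaProducer[String, Array[Byte]](props)
val payload = new Array[Byte](150 * 1024 * 1024) // stand-in for our ~150 MB record
val record = new ProducerRecord[String, Array[Byte]]("my-topic", "key", payload)
val metadata = producer.send(record).get()
producer.close()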


Any help will be appreciated.

Thanks
Pulkit

Re: NetworkException while sending/publishing records (Producer)

Posted by Pulkit Manchanda <pu...@gmail.com>.
Thanks Shantanu for your response.
The size is a business requirement, and it might increase later; that is why I
am using max.request.size of 1 GB.
I will try compressing the data and see how it performs.
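
Concretely, I plan to try producer-side compression, something like this (a
sketch; the gzip choice is arbitrary, snappy or lz4 would also work):

import java.util.Properties
import org.apache.kafka.clients.producer.ProducerConfig

// Sketch: enable producer-side compression (codec choice is an assumption).
val props = new Properties()
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip")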

Also, sharing the producer blocks the other threads because the records are so
big, and it leads to resource leakage.

Pulkit


Re: NetworkException while sending/publishing records (Producer)

Posted by Shantanu Deshmukh <sh...@gmail.com>.
Firstly, a record size of 150 MB is too big. I am quite sure your timeout
exceptions are due to such large records. There are settings in the producer
and broker configs that let you specify the max message size in bytes, but
records of 150 MB each may still lead to problems as volume grows. You need to
look at how you can reduce your message size.

The Kafka producer is thread-safe, and according to the documentation you will
get the best performance if you share one producer across multiple threads.
Don't create a new Kafka producer for each of your threads.
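
To illustrate (a rough sketch, not a drop-in config; the values and topic are
placeholders you would tune): the size settings I mean are max.request.size on
the producer and message.max.bytes / replica.fetch.max.bytes on the broker,
and the shared-producer pattern looks like this:

import java.util.Properties
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

// Broker side (server.properties), example values only:
//   message.max.bytes=10485760
//   replica.fetch.max.bytes=10485760
val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.StringSerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
  "org.apache.kafka.common.serialization.ByteArraySerializer")
props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "10485760") // keep within broker limits

// One producer instance shared by all threads -- KafkaProducer is thread-safe.
val producer = new KafkaProducer[String, Array[Byte]](props)
val pool = Executors.newFixedThreadPool(4)
(1 to 4).foreach { i =>
  pool.submit(new Runnable {
    override def run(): Unit = {
      val record = new ProducerRecord[String, Array[Byte]]("my-topic", s"key-$i", Array[Byte](1, 2, 3))
      producer.send(record) // async; call .get() only where you truly need to block
    }
  })
}
pool.shutdown()
pool.awaitTermination(1, TimeUnit.MINUTES)
producer.close() // flushes and waits for in-flight sends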
