Posted to users@kafka.apache.org by Dana Powers <da...@gmail.com> on 2016/08/25 16:43:51 UTC

Re: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?

python is generally restricted to a single CPU, and kafka-python will max
out a single CPU well before it maxes a network card. I would recommend
other tools for bulk transfers. Otherwise you may find that partitioning
your data set and running separate python processes for each will increase
the overall CPU available and therefore the throughput.
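
Something along these lines is what I have in mind -- a rough sketch only,
with the broker address, topic name and dummy data as placeholders:

    # Rough sketch: partition the data set and run one kafka-python producer
    # per process, so each process can use its own CPU core.
    from multiprocessing import Process
    from kafka import KafkaProducer

    def produce_chunk(records):
        # each process gets its own producer instance
        producer = KafkaProducer(bootstrap_servers='localhost:9092',
                                 batch_size=64 * 1024, linger_ms=50)
        for record in records:
            producer.send('bulk-transfer', value=record)
        producer.flush()
        producer.close()

    if __name__ == '__main__':
        data = [b'x' * 1000 for _ in range(100000)]   # stand-in for your data set
        n = 4                                          # one chunk per CPU core
        chunks = [data[i::n] for i in range(n)]
        workers = [Process(target=produce_chunk, args=(chunk,)) for chunk in chunks]
        for w in workers:
            w.start()
        for w in workers:
            w.join()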

One day I will spend time improving the CPU performance of kafka-python,
but probably not in the near term.

-Dana

Re: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?

Posted by Sharninder <sh...@gmail.com>.
I think what Dana is suggesting is that since Python isn't doing a good job
of utilising all the available CPU power, you could run multiple Python
processes to process the load. Divide the MongoDB collection into, say, 4
parts and process each part with one Python process, each with its own
producer on the Kafka side.

Or use a multi-threaded Java producer that is able to use the machine
optimally.
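
For the split itself, something along these lines would work -- just a rough
sketch, assuming pymongo plus the kafka-python producer; the connection
string, database, collection and topic names below are placeholders:

    # Sketch: split a MongoDB collection into 4 skip/limit ranges and hand
    # each range to its own producer process. All names are placeholders.
    from multiprocessing import Process
    from pymongo import MongoClient
    from kafka import KafkaProducer
    from bson import json_util

    MONGO = 'mongodb://localhost:27017'

    def produce_range(skip, limit):
        # one MongoClient and one KafkaProducer per process
        coll = MongoClient(MONGO)['mydb']['mycoll']
        producer = KafkaProducer(bootstrap_servers='localhost:9092')
        for doc in coll.find().skip(skip).limit(limit):
            producer.send('mongo-export', json_util.dumps(doc).encode('utf-8'))
        producer.flush()

    if __name__ == '__main__':
        # count() works on pymongo 3.x; newer versions use count_documents({})
        total = MongoClient(MONGO)['mydb']['mycoll'].count()
        n = 4
        size = total // n + 1
        workers = [Process(target=produce_range, args=(i * size, size))
                   for i in range(n)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

skip/limit gets slow on very large collections, so splitting on an indexed
_id range scales better, but the idea is the same.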


On Thu, Aug 25, 2016 at 10:21 PM, Dominik Safaric <do...@gmail.com>
wrote:

> Dear Dana,
>
> > I would recommend
> > other tools for bulk transfers.
>
>
> What tools/languages would you rather recommend than using Python?
>
> I could for sure accomplish the same by using the native Java Kafka
> Producer API, but should this really affect the performance under the
> assumption that the Kafka configuration stays as is?
>
> > On 25 Aug 2016, at 18:43, Dana Powers <da...@gmail.com> wrote:
> >
> > python is generally restricted to a single CPU, and kafka-python will max
> > out a single CPU well before it maxes a network card. I would recommend
> > other tools for bulk transfers. Otherwise you may find that partitioning
> > your data set and running separate python processes for each will
> > increase the overall CPU available and therefore the throughput.
> >
> > One day I will spend time improving the CPU performance of kafka-python,
> > but probably not in the near term.
> >
> > -Dana
>
>


-- 
--
Sharninder

Re: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?

Posted by Dana Powers <da...@gmail.com>.
kafka-python includes some benchmarking scripts in
https://github.com/dpkp/kafka-python/tree/master/benchmarks

The concurrency and execution model of the JVM are both significantly
different from python's. I would definitely recommend some background reading
on the CPython GIL if you are interested in why python threads are restricted
to a single CPU.
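
The quickest way to see the effect is to time the same CPU-bound work under
threads versus processes -- a toy illustration only, nothing kafka-specific:

    # Toy illustration of the CPython GIL: CPU-bound work does not speed up
    # with threads, but does with separate processes.
    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def burn(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as pool:
            list(pool.map(burn, [5000000] * 4))
        return time.perf_counter() - start

    if __name__ == '__main__':
        print('threads:   %.2fs' % timed(ThreadPoolExecutor))    # roughly serial, GIL-bound
        print('processes: %.2fs' % timed(ProcessPoolExecutor))   # close to 4x faster on 4 cores

The producer does release the GIL during I/O, but message serialization and
protocol encoding are pure-python CPU work, which is where that single-core
ceiling comes from.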

-Dana

On Thu, Aug 25, 2016 at 9:53 AM, Tauzell, Dave <Da...@surescripts.com>
wrote:
> I would write a python client that writes dummy data to kafka to measure
> how fast you can write to Kafka without MongoDB in the mix. I've been doing
> load testing recently, and with 3 brokers I can write 100MB/s (using Java
> clients).
>
> -Dave
>
> -----Original Message-----
> From: Dominik Safaric [mailto:dominiksafaric@gmail.com]
> Sent: Thursday, August 25, 2016 11:51 AM
> To: users@kafka.apache.org
> Subject: Re: Kafka Producer performance - 400GB of transfer on single
> instance taking > 72 hours?
>
> Dear Dana,
>
>> I would recommend
>> other tools for bulk transfers.
>
>
> What tools/languages would you rather recommend than using Python?
>
> I could for sure accomplish the same by using the native Java Kafka
> Producer API, but should this really affect the performance under the
> assumption that the Kafka configuration stays as is?
>
>> On 25 Aug 2016, at 18:43, Dana Powers <da...@gmail.com> wrote:
>>
>> python is generally restricted to a single CPU, and kafka-python will
>> max out a single CPU well before it maxes a network card. I would
>> recommend other tools for bulk transfers. Otherwise you may find that
>> partitioning your data set and running separate python processes for
>> each will increase the overall CPU available and therefore the
>> throughput.
>>
>> One day I will spend time improving the CPU performance of
>> kafka-python, but probably not in the near term.
>>
>> -Dana
>

RE: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?

Posted by "Tauzell, Dave" <Da...@surescripts.com>.
I would write a python client that writes dummy data to kafka to measure how fast you can write to Kafka without MongoDB in the mix. I've been doing load testing recently, and with 3 brokers I can write 100MB/s (using Java clients).
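
Something like this quick sketch would do for the dummy-data test (kafka-python assumed; the broker address, topic name, message size and count are placeholders to adjust):

    # Quick throughput check: push dummy 1 KB messages and report MB/s,
    # taking MongoDB out of the picture entirely.
    import time
    from kafka import KafkaProducer

    BROKERS = 'localhost:9092'   # placeholder broker address
    TOPIC = 'perf-test'          # placeholder topic
    MSG = b'x' * 1024            # 1 KB dummy payload
    COUNT = 500000

    producer = KafkaProducer(bootstrap_servers=BROKERS,
                             acks=1, batch_size=64 * 1024, linger_ms=5)
    start = time.time()
    for _ in range(COUNT):
        producer.send(TOPIC, MSG)
    producer.flush()
    elapsed = time.time() - start
    mb = COUNT * len(MSG) / (1024.0 * 1024.0)
    print('wrote %.1f MB in %.1f s -> %.1f MB/s' % (mb, elapsed, mb / elapsed))

If that number is much higher than what you see end-to-end, the bottleneck is the MongoDB read side or the producer code, not Kafka itself.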

-Dave

-----Original Message-----
From: Dominik Safaric [mailto:dominiksafaric@gmail.com]
Sent: Thursday, August 25, 2016 11:51 AM
To: users@kafka.apache.org
Subject: Re: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?

Dear Dana,

> I would recommend
> other tools for bulk transfers.


What tools/languages would you rather recommend than using Python?

I could for sure accomplish the same by using the native Java Kafka Producer API, but should this really affect the performance under the assumption that the Kafka configuration stays as is?

> On 25 Aug 2016, at 18:43, Dana Powers <da...@gmail.com> wrote:
>
> python is generally restricted to a single CPU, and kafka-python will
> max out a single CPU well before it maxes a network card. I would
> recommend other tools for bulk transfers. Otherwise you may find that
> partitioning your data set and running separate python processes for
> each will increase the overall CPU available and therefore the throughput.
>
> One day I will spend time improving the CPU performance of
> kafka-python, but probably not in the near term.
>
> -Dana


Re: Kafka Producer performance - 400GB of transfer on single instance taking > 72 hours?

Posted by Dominik Safaric <do...@gmail.com>.
Dear Dana,

> I would recommend
> other tools for bulk transfers.


What tools/languages would you rather recommend than using Python?

I could for sure accomplish the same by using the native Java Kafka Producer API, but should this really affect the performance under the assumption that the Kafka configuration stays as is?  

> On 25 Aug 2016, at 18:43, Dana Powers <da...@gmail.com> wrote:
> 
> python is generally restricted to a single CPU, and kafka-python will max
> out a single CPU well before it maxes a network card. I would recommend
> other tools for bulk transfers. Otherwise you may find that partitioning
> your data set and running separate python processes for each will increase
> the overall CPU available and therefore the throughput.
> 
> One day I will spend time improving the CPU performance of kafka-python,
> but probably not in the near term.
> 
> -Dana