Posted to dev@kafka.apache.org by "Andrew Jorgensen (JIRA)" <ji...@apache.org> on 2016/09/07 21:13:21 UTC

[jira] [Created] (KAFKA-4141) 2x increase in CPU usage on new Producer API

Andrew Jorgensen created KAFKA-4141:
---------------------------------------

             Summary: 2x increase in CPU usage on new Producer API
                 Key: KAFKA-4141
                 URL: https://issues.apache.org/jira/browse/KAFKA-4141
             Project: Kafka
          Issue Type: Bug
            Reporter: Andrew Jorgensen


We are seeing about a 2x increase in CPU usage for the new Kafka producer compared to the 0.8.0.1 producer. We are currently using gzip compression.

We recently upgraded our Kafka server and producer from 0.8.1.1 to 0.10.0.1 and noticed that CPU usage for the new producer increased significantly compared to the old producer. This has required more resources to do the same amount of work as before. Some quick profiling shows that during sends roughly half of the CPU cycles are spent in org.apache.kafka.common.record.Compressor.putRecord and the other half in org.apache.kafka.common.record.Record.computeChecksum (each around 5.8% of total CPU cycles). I know it's not an apples-to-apples comparison, but the old producer did not appear to have this overhead, or at least it was greatly reduced.
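For context, here is a minimal sketch of a producer set up the same way (new Producer API with gzip compression). The broker address, topic name, and serializers below are illustrative placeholders, not our exact configuration:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class GzipProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker list; replace with real brokers.
        props.put("bootstrap.servers", "localhost:9092");
        // Same compression codec we use in production.
        props.put("compression.type", "gzip");
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        Producer<byte[], byte[]> producer = new KafkaProducer<>(props);
        try {
            // Each send passes through Compressor.putRecord and
            // Record.computeChecksum, which is where the CPU time
            // shows up in the profile.
            producer.send(new ProducerRecord<>("example-topic", "payload".getBytes()));
        } finally {
            producer.close();
        }
    }
}
{code}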

Is this a known performance degradation compared to the old producer? 

Here is the trace from the old producer:
!http://imgur.com/1xS34Dl.jpg!

New producer:
!http://imgur.com/0w0G5b1.jpg!


