Posted to jira@kafka.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/03/03 00:56:00 UTC

[jira] [Commented] (KAFKA-8995) Add new metric on broker to illustrate produce request compression percentage

    [ https://issues.apache.org/jira/browse/KAFKA-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049809#comment-17049809 ] 

ASF GitHub Bot commented on KAFKA-8995:
---------------------------------------

guozhangwang commented on pull request #8208: KAFKA-8995: delete all topics before recreating
URL: https://github.com/apache/kafka/pull/8208
 
 
   I think the root causes of KAFKA-8893, KAFKA-8894, KAFKA-8895 and KSTREAMS-3779 are the same: some intermediate topics are not deleted in the `setup` logic before the user topics are recreated. This can cause `waitForDeletion` (which checks for an exact match against all existing topics) to fail, and can also cause extra records to be returned because of intermediate topics left over from the previous test case.
   
   Also, inspired by https://github.com/apache/kafka/pull/5418/files, I used a longer timeout (120 secs) for deleting all topics.
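   
   For reference, here is a minimal sketch of the deletion step using the plain `Admin` client. This is not the actual test-harness utility; the class and method names are made up for illustration:
   
   ```java
   import java.time.Duration;
   import java.util.Properties;
   import java.util.Set;
   import org.apache.kafka.clients.admin.Admin;
   import org.apache.kafka.clients.admin.AdminClientConfig;
   
   public class TopicPurge {
       // Hypothetical helper: delete every (non-internal) topic, then poll until
       // the cluster reports none left, failing after the given timeout.
       public static void deleteAllTopicsAndWait(String bootstrapServers, Duration timeout) throws Exception {
           Properties props = new Properties();
           props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
           try (Admin admin = Admin.create(props)) {
               // listTopics() excludes internal topics by default.
               Set<String> topics = admin.listTopics().names().get();
               if (!topics.isEmpty()) {
                   admin.deleteTopics(topics).all().get();
               }
               // Topic deletion completes asynchronously, so poll until done.
               long deadline = System.currentTimeMillis() + timeout.toMillis();
               while (!admin.listTopics().names().get().isEmpty()) {
                   if (System.currentTimeMillis() > deadline) {
                       throw new AssertionError("topics still present after " + timeout);
                   }
                   Thread.sleep(100);
               }
           }
       }
   }
   ```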
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 


> Add new metric on broker to illustrate produce request compression percentage
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-8995
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8995
>             Project: Kafka
>          Issue Type: Improvement
>          Components: core
>            Reporter: Jun Rao
>            Assignee: Guozhang Wang
>            Priority: Major
>              Labels: needs-kip
>
> When `compression.type` is set to `producer`, we accept the produce request and append its batches to the log with the producer's own compression; otherwise we recompress the messages according to the configured compression type before appending. There are pros and cons to recompressing the data: you pay more CPU to recompress, but you may reduce the storage cost.
> In practice, if the incoming produce requests are not compressed, then compressing before appending may be more beneficial; otherwise, keeping them as-is via the `producer` config may be better. Adding a metric that exposes the percentage of incoming produce requests that are compressed would be a helpful data point for operators selecting a compression policy.
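
For illustration (not from the ticket): a minimal sketch of the producer side of this trade-off, using standard client configs. With broker-side `compression.type=producer`, batches compressed like this are appended to the log as-is; the proposed metric would report what fraction of incoming requests arrive already compressed. The topic name and bootstrap address below are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CompressedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Batches are compressed on the client; with compression.type=producer the
        // broker keeps this codec, otherwise it recompresses before appending.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
        }
    }
}
```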


