Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2014/09/04 23:57:24 UTC

[jira] [Resolved] (KAFKA-703) A fetch request in Fetch Purgatory can double count the bytes from the same delayed produce request

     [ https://issues.apache.org/jira/browse/KAFKA-703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang resolved KAFKA-703.
---------------------------------
    Resolution: Fixed

> A fetch request in Fetch Purgatory can double count the bytes from the same delayed produce request
> ---------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-703
>                 URL: https://issues.apache.org/jira/browse/KAFKA-703
>             Project: Kafka
>          Issue Type: Bug
>          Components: purgatory
>    Affects Versions: 0.8.1
>            Reporter: Sriram Subramanian
>            Assignee: Sriram Subramanian
>            Priority: Blocker
>             Fix For: 0.8.2
>
>
> When a produce request is handled, the fetch purgatory is checked to see whether any delayed fetch requests can now be satisfied. When the produce request itself is later satisfied, the check is done again; if the same fetch request is still in the fetch purgatory, the bytes from that produce request end up being double counted.
> Possible Solutions
> 1. In the delayed produce request case, do the check only after the produce request is satisfied. This could delay the fetch request from being satisfied.
> 2. Remove the fetch request's dependency on the produce request and just look at the last logical log offset (which should mostly be cached). This would require replica.fetch.min.bytes to be expressed as a number of messages rather than bytes. This would also help KAFKA-671, since we would no longer need to pass the ProduceRequest object to the producer purgatory and hence would not consume memory holding it.
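For illustration, a minimal sketch of the double-count scenario described above (the class and method names are hypothetical, not the actual Kafka purgatory code): a single 120-byte produce request is counted toward the same delayed fetch on both checks, so a fetch waiting for 200 bytes appears satisfied even though only 120 bytes of new data actually arrived.

object DoubleCountSketch {
  // A delayed fetch that waits until at least minBytes of new data have accumulated.
  // (Hypothetical stand-in for a delayed fetch request in the fetch purgatory.)
  class DelayedFetch(val minBytes: Int) {
    var accumulatedBytes = 0
    def checkSatisfied(newBytes: Int): Boolean = {
      accumulatedBytes += newBytes
      accumulatedBytes >= minBytes
    }
  }

  def main(args: Array[String]): Unit = {
    val fetch = new DelayedFetch(minBytes = 200)
    val produceBytes = 120 // one produce request carrying 120 bytes

    // Check 1: when the produce request is first handled.
    val first = fetch.checkSatisfied(produceBytes)  // accumulated = 120, not satisfied

    // Check 2: when the same delayed produce request is later satisfied and the
    // fetch is still sitting in the fetch purgatory, the same bytes are added again.
    val second = fetch.checkSatisfied(produceBytes) // accumulated = 240, "satisfied"

    println(s"first check satisfied=$first, second check satisfied=$second, " +
      s"accumulated=${fetch.accumulatedBytes}")
  }
}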



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)