Posted to dev@kafka.apache.org by "Jay Kreps (JIRA)" <ji...@apache.org> on 2015/02/07 08:04:35 UTC

[jira] [Commented] (KAFKA-1865) Investigate adding a flush() call to new producer API

    [ https://issues.apache.org/jira/browse/KAFKA-1865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310590#comment-14310590 ] 

Jay Kreps commented on KAFKA-1865:
----------------------------------

A key aspect of this that isn't obvious is that flush() has to disable linger.

That is, say I have linger.ms=3000.
If I do:
{code}
for (int i = 0; i < 1000; i++)
    producer.send(new ProducerRecord<String, String>("topic", Integer.toString(i)));
producer.flush();
{code}

The flush() call isn't as simple as just blocking until the record accumulator drains, since that would mean waiting up to an extra 3 seconds, during which of course no other records would be written. So flush() should trigger an immediate send, just as close() and memory exhaustion do in the record accumulator.
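
One way to get that behavior is sketched below. This is a rough illustration only; the class, field, and method names (RecordAccumulator, flushesInProgress, beginFlush, batchReady) are assumptions rather than the actual producer internals. The idea is a flush-in-progress counter that the sender's readiness check consults, so batches become sendable immediately regardless of linger.ms:

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: illustrative names, not the real accumulator code.
public class RecordAccumulator {
    private final long lingerMs;
    private final AtomicInteger flushesInProgress = new AtomicInteger(0);

    public RecordAccumulator(long lingerMs) {
        this.lingerMs = lingerMs;
    }

    // flush() would call this before waiting on outstanding batches,
    // and the matching endFlush() once they have all completed.
    public void beginFlush() { flushesInProgress.incrementAndGet(); }
    public void endFlush()   { flushesInProgress.decrementAndGet(); }

    // The sender thread asks whether a batch should be drained now.
    public boolean batchReady(long waitedTimeMs, boolean batchFull,
                              boolean closing, boolean memoryExhausted) {
        boolean flushing = flushesInProgress.get() > 0;
        boolean lingerExpired = waitedTimeMs >= lingerMs;
        // A flush in progress joins close() and memory exhaustion as
        // conditions that override linger.ms and force an immediate send.
        return batchFull || lingerExpired || closing || memoryExhausted || flushing;
    }
}
{code}

Done that way it is just one more condition in the existing readiness check, so producers that never call flush() pay essentially nothing for it.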

> Investigate adding a flush() call to new producer API
> -----------------------------------------------------
>
>                 Key: KAFKA-1865
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1865
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Jay Kreps
>
> The postcondition of this would be that any record enqueued prior to flush() has completed being sent (either successfully or not) by the time flush() returns.
> An open question is whether you can continue sending new records while this call is executing (on other threads).
> We should only do this if it doesn't add inefficiencies for people who don't use it.
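
To make the postcondition above concrete, here is a hedged usage sketch. It assumes the flush() call under discussion plus standard new-producer configs; the topic name and bootstrap address are placeholders. After flush() returns, every future handed back by an earlier send() should already be complete, successfully or not:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class FlushPostcondition {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("linger.ms", "3000"); // long linger to show that flush() must override it

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            List<Future<RecordMetadata>> results = new ArrayList<Future<RecordMetadata>>();
            for (int i = 0; i < 1000; i++)
                results.add(producer.send(new ProducerRecord<String, String>("topic", Integer.toString(i))));

            producer.flush(); // the call proposed in this issue

            // Expected postcondition: every send has finished (successfully or not),
            // so none of these futures should still be pending.
            for (Future<RecordMetadata> f : results)
                if (!f.isDone())
                    throw new IllegalStateException("flush() returned with an incomplete send");
        } finally {
            producer.close();
        }
    }
}
{code}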


