Posted to dev@kafka.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2016/05/23 16:55:12 UTC
[jira] [Commented] (KAFKA-3747) Close `RecordBatch.records` when append to batch fails
[ https://issues.apache.org/jira/browse/KAFKA-3747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296641#comment-15296641 ]
ASF GitHub Bot commented on KAFKA-3747:
---------------------------------------
GitHub user ijuma opened a pull request:
https://github.com/apache/kafka/pull/1418
KAFKA-3747; Close `RecordBatch.records` when append to batch fails
With this change, `test_producer_throughput` with message_size=10000, compression_type=snappy and a snappy buffer size of 32k can be executed in a heap of 192m in a local environment (768m is needed without this change).
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ijuma/kafka kafka-3747-close-record-batch-when-append-fails
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/kafka/pull/1418.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1418
----
commit 40b9076efc4f284bc0af2a83cb7754adc26ee362
Author: Ismael Juma <is...@juma.me.uk>
Date: 2016-05-23T16:26:15Z
Close `RecordBatch` if `tryAppend` fails
commit cb495aaea03778dda1a579a399ce0bf85c03ecfa
Author: Ismael Juma <is...@juma.me.uk>
Date: 2016-05-23T16:27:56Z
Use diamond operator in `RecordAccumulator` and prefer `Deque.isEmpty` over `size`
----
> Close `RecordBatch.records` when append to batch fails
> ------------------------------------------------------
>
> Key: KAFKA-3747
> URL: https://issues.apache.org/jira/browse/KAFKA-3747
> Project: Kafka
> Issue Type: Improvement
> Reporter: Ismael Juma
> Assignee: Ismael Juma
> Fix For: 0.10.0.1
>
>
> We should close the existing `RecordBatch.records` when we create a new `RecordBatch` for the `TopicPartition`.
> This would mean that we would only retain temporary resources like compression stream buffers for one `RecordBatch` per partition. This can have a significant impact when producers are dealing with slow brokers; see KAFKA-3704 for more details.
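The idea in the description can be sketched as follows. `Records` and `Batch` here are hypothetical stand-ins for Kafka's `MemoryRecords` and `RecordBatch`, and the accumulator logic is reduced to a single-partition deque; this is an illustration of the close-on-new-batch pattern, not the actual patch.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CloseOnNewBatchSketch {

    // Stand-in for MemoryRecords: holds temporary resources
    // (e.g. a compression stream buffer) until closed.
    static class Records {
        boolean closed = false;
        void close() { closed = true; } // releases the temporary buffers
    }

    // Stand-in for RecordBatch. For illustration, every append to an
    // existing batch "fails", as if the batch were already full.
    static class Batch {
        final Records records = new Records();
        boolean tryAppend(byte[] value) {
            return false; // pretend the batch has no room left
        }
    }

    // Per-partition queue of batches, as in RecordAccumulator.
    static final Deque<Batch> queue = new ArrayDeque<>();

    // Simplified accumulator append: if appending to the last batch
    // fails, close its records *before* allocating a new batch, so at
    // most one batch per partition holds open compression buffers.
    static Batch append(byte[] value) {
        Batch last = queue.peekLast();
        if (last != null && last.tryAppend(value))
            return last;
        if (last != null)
            last.records.close(); // the fix: free buffers eagerly
        Batch fresh = new Batch();
        queue.addLast(fresh);
        return fresh;
    }

    public static void main(String[] args) {
        Batch first = append(new byte[10]);
        Batch second = append(new byte[10]); // tryAppend fails, new batch
        System.out.println("first closed: " + first.records.closed);
        System.out.println("second closed: " + second.records.closed);
    }
}
```

Without the eager `close()`, every batch in the deque keeps its compression buffers alive until it is drained and sent, which is why slow brokers inflated heap usage in the throughput test mentioned above.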
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)