Posted to issues@flink.apache.org by "Piotr Nowojski (Jira)" <ji...@apache.org> on 2021/10/29 11:40:00 UTC

[jira] [Closed] (FLINK-24190) Handling large record with buffer debloat

     [ https://issues.apache.org/jira/browse/FLINK-24190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Piotr Nowojski closed FLINK-24190.
----------------------------------
    Resolution: Fixed

Merged to master as a3378133471^..a3378133471

> Handling large record with buffer debloat
> -----------------------------------------
>
>                 Key: FLINK-24190
>                 URL: https://issues.apache.org/jira/browse/FLINK-24190
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / Network
>    Affects Versions: 1.14.0
>            Reporter: Anton Kalashnikov
>            Assignee: Anton Kalashnikov
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.15.0
>
>
> If the buffer size becomes too small (less than the record size) due to buffer debloat, it can lead to performance degradation. It looks like it is better to keep the buffer size at least as large as the record size (or even greater). 
> The implementation should be easy: we can choose the maximum of desirableBufferSize and recordSize when requesting a new buffer (BufferWritingResultPartition#addToSubpartition).
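The "maximum of desirableBufferSize and recordSize" idea from the description can be sketched as follows. This is an illustrative standalone example, not Flink's actual code; the class and method names are hypothetical, only the parameter names desirableBufferSize and recordSize come from the issue text.

```java
// Hypothetical sketch of the fix described in this issue; names other than
// desirableBufferSize/recordSize are illustrative, not Flink's actual API.
public class BufferSizeSketch {

    /**
     * Size to request for the next buffer: never smaller than the record
     * being written. Buffer debloating may have shrunk desirableBufferSize
     * below the record size; requesting at least recordSize avoids spreading
     * a single large record across many tiny buffers.
     */
    static int bufferSizeToRequest(int desirableBufferSize, int recordSize) {
        return Math.max(desirableBufferSize, recordSize);
    }

    public static void main(String[] args) {
        // Large record, debloated buffer: the record size wins.
        System.out.println(bufferSizeToRequest(1024, 8192));
        // Small record: the debloated (desirable) buffer size wins.
        System.out.println(bufferSizeToRequest(32768, 8192));
    }
}
```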



--
This message was sent by Atlassian Jira
(v8.3.4#803005)