Posted to dev@flink.apache.org by "Anton Kalashnikov (Jira)" <ji...@apache.org> on 2021/09/07 10:05:00 UTC
[jira] [Created] (FLINK-24190) Handling large records with buffer debloat
Anton Kalashnikov created FLINK-24190:
-----------------------------------------
Summary: Handling large records with buffer debloat
Key: FLINK-24190
URL: https://issues.apache.org/jira/browse/FLINK-24190
Project: Flink
Issue Type: Sub-task
Components: Runtime / Network
Affects Versions: 1.14.0
Reporter: Anton Kalashnikov
Fix For: 1.15.0
If buffer debloating shrinks the buffer size below the record size, it can lead to performance degradation, since a single record then spans multiple buffers. It looks like it is better to keep the buffer size at least as large as the record size. So it needs to be checked how bad the degradation can be, and fixed.
The implementation should be easy: we can choose the maximum of desirableBufferSize and recordSize when requesting the new buffer (BufferWritingResultPartition#addToSubpartition).
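The proposed fix can be sketched as below. Note this is a minimal illustration, not Flink's actual code: the helper name bufferSizeToRequest is hypothetical, and the real change would live inside BufferWritingResultPartition#addToSubpartition.

```java
public class BufferSizeSketch {

    /**
     * Returns the size of the buffer to request. The buffer debloater may
     * suggest a size smaller than the current record; in that case we fall
     * back to the record size so the record still fits in a single buffer.
     */
    static int bufferSizeToRequest(int desirableBufferSize, int recordSize) {
        return Math.max(desirableBufferSize, recordSize);
    }

    public static void main(String[] args) {
        // Debloat suggests 4 KiB but the record is 64 KiB: request 64 KiB.
        System.out.println(bufferSizeToRequest(4 * 1024, 64 * 1024));   // prints 65536
        // Debloat suggests 32 KiB and the record fits: keep the debloated size.
        System.out.println(bufferSizeToRequest(32 * 1024, 8 * 1024));   // prints 32768
    }
}
```

The trade-off of this approach is that a stream of large records effectively disables debloating for those buffers, but that is preferable to splitting every record across several undersized buffers.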
--
This message was sent by Atlassian Jira
(v8.3.4#803005)