Posted to user@flink.apache.org by Ajinkya Pathrudkar <aj...@gmail.com> on 2023/03/20 16:22:22 UTC

Out-of-memory errors after upgrading Flink from version 1.14 to 1.15 and Java 8 to Java 11

I hope this email finds you well. We recently upgraded our Flink deployment
from 1.14 to 1.15 and moved from Java 8 to Java 11. Since the upgrade, we
have been encountering out-of-memory (direct memory) exceptions.

The exception message suggests that increasing the task off-heap memory
could resolve the issue, but we are uncertain whether changes to the memory
model between these versions may be contributing to the problem. The error
message we received is as follows:

java.lang.OutOfMemoryError: Can't allocate enough direct buffer for batch
shuffle read buffer pool (bytes allocated: 26017792, bytes still needed:
41091072). To avoid the exception, you need to do one of the following
adjustments: 1) If you have ever decreased
taskmanager.memory.framework.off-heap.size, you need to undo the decrement;
2) If you ever increased
taskmanager.memory.framework.off-heap.batch-shuffle.size, you should also
increase taskmanager.memory.framework.off-heap.size; 3) If neither the
above cases, it usually means some other parts of your application have
consumed too many direct memory and the value of
taskmanager.memory.task.off-heap.size should be increased.
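
As a sanity check on the figures in that message, the bytes already
allocated plus the bytes still needed sum to exactly 64 MiB, which matches
the 64m default of
`taskmanager.memory.framework.off-heap.batch-shuffle.size` mentioned later
in this thread, so the pool appears to be asking for its full default size:

```python
# Byte counts quoted in the OutOfMemoryError message above.
allocated = 26017792     # bytes the read buffer pool managed to allocate
still_needed = 41091072  # bytes it could not allocate

total = allocated + still_needed
print(total)                       # 67108864
print(total == 64 * 1024 * 1024)   # True: exactly 64 MiB, the default
                                   # batch-shuffle read buffer pool size
```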

We would appreciate any insights or suggestions on how to proceed. Please
let us know if there are any additional details we can provide to help
diagnose the issue.

Thank you for your time and consideration.


Thanks,
Ajinkya

Re: Out-of-memory errors after upgrading Flink from version 1.14 to 1.15 and Java 8 to Java 11

Posted by Shammon FY <zj...@gmail.com>.
Hi Ajinkya

If the data volume of your job is small, I think you can try decreasing the
size of the batch shuffle read buffer pool with the config option
`taskmanager.memory.framework.off-heap.batch-shuffle.size`; the default
value is `64m`. You can find more information in the docs [1]

[1]
https://github.com/apache/flink/blob/master/docs/content/docs/ops/batch/batch_shuffle.md#sort-shuffle
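
A minimal sketch of that change in flink-conf.yaml (the `32m` value below
is only an illustration, size it to your job's data volume):

```yaml
# Shrink the batch shuffle read buffer pool for small-volume jobs.
taskmanager.memory.framework.off-heap.batch-shuffle.size: 32m

# Alternatively, per the error message, keep the pool at its default and
# grow the framework off-heap budget instead (128m by default):
# taskmanager.memory.framework.off-heap.size: 256m
```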

Best,
Shammon FY
