Posted to user@flink.apache.org by LakeShen <sh...@gmail.com> on 2020/03/31 09:34:01 UTC

Question about the flink 1.6 memory config

Hi community,

I am currently optimizing the memory configuration of our Flink 1.6 tasks.
Reading the source code, I see that Flink first computes the container
cut-off memory as Math.max(containerized.heap-cutoff-min,
containerized.heap-cutoff-ratio * TaskManager memory), where
containerized.heap-cutoff-min defaults to 600 MB and
containerized.heap-cutoff-ratio defaults to 0.25. For example, if the
TaskManager memory is 4 GB, the cut-off memory is 1 GB.

However, after enabling GC logging for the TaskManager, I found that the
metaspace only uses about 60 MB. I personally feel the cut-off memory is a
little too large. Can it be reduced, for example by setting
containerized.heap-cutoff-ratio to 0.15? Would there be any problem with
this configuration?
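
Concretely, the change I have in mind in flink-conf.yaml would be (0.15 is
my proposed value, not a default):

    containerized.heap-cutoff-ratio: 0.15
    # containerized.heap-cutoff-min would stay at its default of 600 (MB),
    # so the floor still applies to small containers.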

I am looking forward to your reply.

Best wishes,
LakeShen

Re: Question about the flink 1.6 memory config

Posted by Xintong Song <to...@gmail.com>.
The container cut-off accounts not only for metaspace, but also for native
memory footprints such as thread stacks, the code cache, and compressed
class space. If you run streaming jobs with the RocksDB state backend, it
also accounts for RocksDB's memory usage.
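
If you want to see where the native memory actually goes before lowering
the cut-off, the JVM's native memory tracking is one way to measure it. A
rough recipe (standard JVM flags and tooling, not Flink-specific;
<taskmanager-pid> is a placeholder for your TaskManager process id):

    # Enable NMT on the TaskManager JVM (adds some overhead; use for diagnosis),
    # e.g. via flink-conf.yaml:
    #   env.java.opts.taskmanager: -XX:NativeMemoryTracking=summary
    # Then query the running TaskManager process:
    jcmd <taskmanager-pid> VM.native_memory summary

This reports metaspace, thread stacks, code cache, and class space line by
line. Note that RocksDB allocates outside the JVM, so its usage will not
show up in this summary.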

The consequence of a smaller cut-off depends on your environment and
workloads. For standalone clusters, the cut-off has no effect. For
containerized environments, depending on the Yarn/Mesos configuration,
your container may or may not get killed for exceeding its memory limit.
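
To make the trade-off concrete with the 4 GB example from your question
(plain arithmetic on the cut-off formula, ignoring network buffers and
managed memory):

    ratio 0.25: cut-off = max(600, 0.25 * 4096) = 1024 MB of headroom
    ratio 0.15: cut-off = max(600, 0.15 * 4096) =  614 MB of headroom

That frees roughly 410 MB for the heap, but everything outside the heap,
including metaspace, thread stacks, code cache, and RocksDB if you use it,
must then fit into 614 MB, or the container risks being killed.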

Thank you~

Xintong Song



On Tue, Mar 31, 2020 at 5:34 PM LakeShen <sh...@gmail.com> wrote:

> Hi community,
>
> I am currently optimizing the memory configuration of our Flink 1.6
> tasks. Reading the source code, I see that Flink first computes the
> container cut-off memory as Math.max(containerized.heap-cutoff-min,
> containerized.heap-cutoff-ratio * TaskManager memory), where
> containerized.heap-cutoff-min defaults to 600 MB and
> containerized.heap-cutoff-ratio defaults to 0.25. For example, if the
> TaskManager memory is 4 GB, the cut-off memory is 1 GB.
>
> However, after enabling GC logging for the TaskManager, I found that the
> metaspace only uses about 60 MB. I personally feel the cut-off memory is
> a little too large. Can it be reduced, for example by setting
> containerized.heap-cutoff-ratio to 0.15? Would there be any problem with
> this configuration?
>
> I am looking forward to your reply.
>
> Best wishes,
> LakeShen
>
