Posted to user@flink.apache.org by Dan Circelli <da...@arcticwolf.com> on 2017/09/28 15:47:58 UTC

Job Manager minimum memory hard coded to 768

In our usage of Flink, our Yarn Job Manager never goes above ~48 MB of heap utilization. In order to maximize the heap available to the Task Managers, I thought we could shrink our Job Manager heap setting down from the 1024 MB we were using to something tiny like 128 MB. However, doing so results in the runtime error:

java.lang.IllegalArgumentException: The JobManager memory (64) is below the minimum required memory amount of 768 MB
at org.apache.flink.yarn.AbstractYarnClusterDescriptor.setJobManagerMemory(AbstractYarnClusterDescriptor.java:187)
…

Looking into it: this value isn’t controlled by the settings in yarn-site.xml but is actually hardcoded in the Flink code base to 768 MB. (See AbstractYarnClusterDescriptor.java, where MIN_JM_MEMORY = 768.)
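
For context, the check that produces the error above amounts to something like the following minimal sketch. The class, method, and constant names are taken from the stack trace in this thread; the body is a paraphrase, not the actual Flink source:

```java
// Sketch of the hardcoded minimum check discussed in this thread.
// Names follow the stack trace above; the implementation is paraphrased.
public class AbstractYarnClusterDescriptor {

    // The hardcoded lower bound in question, in MB.
    static final int MIN_JM_MEMORY = 768;

    private int jobManagerMemoryMb = 1024;

    public void setJobManagerMemory(int memoryMb) {
        if (memoryMb < MIN_JM_MEMORY) {
            throw new IllegalArgumentException(
                    "The JobManager memory (" + memoryMb
                            + ") is below the minimum required memory amount of "
                            + MIN_JM_MEMORY + " MB");
        }
        this.jobManagerMemoryMb = memoryMb;
    }

    public int getJobManagerMemory() {
        return jobManagerMemoryMb;
    }

    public static void main(String[] args) {
        AbstractYarnClusterDescriptor descriptor = new AbstractYarnClusterDescriptor();
        descriptor.setJobManagerMemory(1024); // accepted: above the minimum
        try {
            descriptor.setJobManagerMemory(128); // rejected: below MIN_JM_MEMORY
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Any value below 768 is rejected outright, regardless of what yarn-site.xml allows.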


Why is this hardcoded?
Why not let the value be set via the YARN site configuration XML?
Why such a high minimum?


Thanks,
Dan

Re: Job Manager minimum memory hard coded to 768

Posted by Haohui Mai <ri...@gmail.com>.
We have observed the same issue in our production cluster. Filed FLINK-7743
for the fix.

~Haohui


Re: Job Manager minimum memory hard coded to 768

Posted by Till Rohrmann <tr...@apache.org>.
Hi Dan,

I think Aljoscha is right: the 768 MB minimum JM memory is more of a
legacy artifact that was never properly refactored. If I remember
correctly, we had problems when starting Flink in a container with a
lower memory limit, and this limit was introduced because of that. But I'm
not sure whether that is still the case; it should definitely be verified
again.

Cheers,
Till


Re: Job Manager minimum memory hard coded to 768

Posted by Aljoscha Krettek <al...@apache.org>.
I believe this restriction dates from a time when the setting "containerized.heap-cutoff-min" did not yet exist, since this part of the code is quite old.

I think we should be able to remove that restriction, but I'm not sure, so I'm cc'ing Till, who knows those parts best.

@Till, what do you think?
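
For readers following along, the cutoff setting mentioned above lives in flink-conf.yaml. A fragment like the following shows how it relates to the JobManager heap (key names as documented for Flink releases of this era; the values shown are the documented defaults and are illustrative only):

```yaml
# Illustrative flink-conf.yaml fragment.
# Total memory requested from YARN for the JobManager container, in MB:
jobmanager.heap.mb: 1024
# Minimum amount (MB) subtracted from the container size for off-heap usage:
containerized.heap-cutoff-min: 600
# Fraction of container memory subtracted as cutoff; the larger of the
# two cutoff values wins:
containerized.heap-cutoff-ratio: 0.25
```

With a cutoff already reserving headroom for off-heap memory, a separate hardcoded 768 MB floor on the JM container looks redundant, which is the point of the question in this thread.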
