Posted to issues@flink.apache.org by "Maximilian Michels (JIRA)" <ji...@apache.org> on 2015/09/03 12:01:46 UTC

[jira] [Commented] (FLINK-2235) Local Flink cluster allocates too much memory

    [ https://issues.apache.org/jira/browse/FLINK-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728792#comment-14728792 ] 

Maximilian Michels commented on FLINK-2235:
-------------------------------------------

Also pushed to the release 0.9 branch.

> Local Flink cluster allocates too much memory
> ---------------------------------------------
>
>                 Key: FLINK-2235
>                 URL: https://issues.apache.org/jira/browse/FLINK-2235
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime, TaskManager
>    Affects Versions: 0.9
>         Environment: Oracle JDK: 1.6.0_65-b14-462
> Eclipse
>            Reporter: Maximilian Michels
>            Assignee: Maximilian Michels
>            Priority: Minor
>             Fix For: 0.10, 0.9.2
>
>
> When executing a Flink job locally, the task manager gets initialized with an excessively large amount of memory. After a quick look at the code, it seems that the call to {{EnvironmentInformation.getSizeOfFreeHeapMemoryWithDefrag()}} returns a wrong estimate of the available heap memory size (see the sketch below).
> Moreover, the same user switched to Oracle JDK 1.8 and the error disappeared, so I'm guessing this is a Java 1.6 quirk.
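> As a minimal sketch (not the actual Flink implementation), a "free heap after defragmentation" estimate is typically derived from the standard {{java.lang.Runtime}} API along these lines; the class and method names below are illustrative only:
> {code:java}
> // Illustrative sketch of a free-heap estimate built on the standard
> // Runtime API. Not the Flink source code.
> public class FreeHeapEstimate {
>
>     public static long sizeOfFreeHeapMemoryWithDefrag() {
>         // Run a GC first so collectible garbage does not skew the
>         // estimate (the "defrag" part of the name).
>         System.gc();
>         return sizeOfFreeHeapMemory();
>     }
>
>     public static long sizeOfFreeHeapMemory() {
>         Runtime r = Runtime.getRuntime();
>         // maxMemory():   upper bound the heap may grow to (-Xmx, or
>         //                Long.MAX_VALUE if the JVM reports no limit)
>         // totalMemory(): heap currently reserved by the JVM
>         // freeMemory():  unused part of the currently reserved heap
>         return r.maxMemory() - r.totalMemory() + r.freeMemory();
>     }
>
>     public static void main(String[] args) {
>         System.out.println("Estimated free heap: "
>                 + sizeOfFreeHeapMemoryWithDefrag() + " bytes");
>     }
> }
> {code}
> If {{Runtime.maxMemory()}} reports an unrealistically large value on a given JVM (its Javadoc permits {{Long.MAX_VALUE}} when no limit can be determined), an estimate of this shape blows up accordingly, which would be consistent with the over-allocation observed here on JDK 1.6.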



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)