Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/07/04 09:12:00 UTC
[jira] [Commented] (FLINK-9693) Possible memory leak in jobmanager retaining archived checkpoints
[ https://issues.apache.org/jira/browse/FLINK-9693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532501#comment-16532501 ]
ASF GitHub Bot commented on FLINK-9693:
---------------------------------------
GitHub user tillrohrmann opened a pull request:
https://github.com/apache/flink/pull/6251
[FLINK-9693] Set Execution#taskRestore to null after deployment
## What is the purpose of the change
Setting the assigned Execution#taskRestore to null after deployment allows the
JobManagerTaskRestore instance to be garbage collected. Furthermore, it will no
longer be archived along with the Execution in the ExecutionVertex in case of a
restart. This is especially important when state.backend.fs.memory-threshold is
set to a larger value, because all state below this threshold is stored in the
checkpoint meta state files and is therefore kept alive by the retained
JobManagerTaskRestore instances.
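The gist of the change, as a minimal self-contained Java sketch (the class and
method names mirror the Flink internals mentioned above, but the bodies are
simplified stand-ins, not the actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for Flink's Execution and JobManagerTaskRestore,
// modelling the leak and the fix; this is not the actual Flink code.
public class TaskRestoreSketch {

    static class JobManagerTaskRestore {
        // stands in for checkpoint state inlined into the restore handle,
        // e.g. Kafka offsets kept below state.backend.fs.memory-threshold
        final byte[] inlinedState = new byte[1 << 20]; // 1 MiB per vertex
    }

    static class Execution {
        private JobManagerTaskRestore taskRestore;

        void setInitialState(JobManagerTaskRestore restore) {
            this.taskRestore = restore;
        }

        void deploy() {
            // ... build the TaskDeploymentDescriptor from taskRestore and
            //     submit it to the TaskManager ...

            // FLINK-9693: the restore state is only needed up to deployment;
            // dropping the reference makes it garbage collectible and keeps
            // it out of the archived ExecutionVertex history after a restart.
            taskRestore = null;
        }

        JobManagerTaskRestore getTaskRestore() {
            return taskRestore;
        }
    }

    public static void main(String[] args) {
        // With ~41k vertices, as in the report, retaining ~1 MiB of restore
        // state per archived Execution adds up to tens of GiB without the fix.
        List<Execution> archived = new ArrayList<>();
        for (int i = 0; i < 41_567; i++) {
            Execution e = new Execution();
            e.setInitialState(new JobManagerTaskRestore());
            e.deploy(); // nulls taskRestore; the buffer is now collectible
            archived.add(e);
        }
        long leaked = archived.stream()
            .filter(e -> e.getTaskRestore() != null)
            .count();
        System.out.println("archived executions retaining restore state: " + leaked);
    }
}
```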
## Verifying this change
- Added `ExecutionTest#testTaskRestoreStateIsNulledAfterDeployment`
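
A rough shape of such a test, written here against the simplified sketch above
rather than Flink's real `Execution` class:

```java
import static org.junit.Assert.assertNull;

import org.junit.Test;

// Illustrative only: the real ExecutionTest exercises Flink's actual
// Execution; this version targets the TaskRestoreSketch classes above.
public class TaskRestoreSketchTest {

    @Test
    public void testTaskRestoreStateIsNulledAfterDeployment() {
        TaskRestoreSketch.Execution execution = new TaskRestoreSketch.Execution();
        execution.setInitialState(new TaskRestoreSketch.JobManagerTaskRestore());

        execution.deploy();

        // the restore handle must not outlive deployment, or it will be
        // retained by the archived ExecutionVertex after a restart
        assertNull(execution.getTaskRestore());
    }
}
```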
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
- The S3 file system connector: (no)
## Documentation
- Does this pull request introduce a new feature? (no)
- If yes, how is the feature documented? (not applicable)
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tillrohrmann/flink fixMemoryLeakInJobManager
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/6251.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #6251
----
> Possible memory leak in jobmanager retaining archived checkpoints
> -----------------------------------------------------------------
>
> Key: FLINK-9693
> URL: https://issues.apache.org/jira/browse/FLINK-9693
> Project: Flink
> Issue Type: Bug
> Components: JobManager, State Backends, Checkpointing
> Affects Versions: 1.5.0, 1.6.0
> Environment: (two inline screenshots, not available in this plain text view)
> Reporter: Steven Zhen Wu
> Assignee: Till Rohrmann
> Priority: Major
> Labels: pull-request-available
> Attachments: 41K_ExecutionVertex_objs_retained_9GB.png, ExecutionVertexZoomIn.png
>
>
> First, some context about the job
> * Flink 1.4.1
> * stand-alone deployment mode
> * embarrassingly parallel: all operators are chained together
> * parallelism is over 1,000
> * stateless except for the Kafka source operators; checkpoint size is 8.4 MB
> * set "state.backend.fs.memory-threshold" so that only the jobmanager writes checkpoint data to S3 (see the config sketch after this list)
> * internal (non-externalized) checkpoints, with 10 checkpoints retained in history
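>
> A flink-conf.yaml sketch of such a setup (the key names are real Flink options;
> the values are illustrative, not taken from the report):
>
>     state.backend: filesystem
>     state.checkpoints.dir: s3://<bucket>/checkpoints
>     # state below this threshold is inlined into the jobmanager-written
>     # checkpoint metadata instead of being uploaded as separate S3 files
>     state.backend.fs.memory-threshold: 1048576
>     state.checkpoints.num-retained: 10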
>
> Summary of the observations
> * 41,567 ExecutionVertex objects retained 9+ GB of memory
> * Expanding one ExecutionVertex shows that it appears to be storing the Kafka offsets for the source operator
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)