Posted to issues@flink.apache.org by "Yun Tang (JIRA)" <ji...@apache.org> on 2019/04/12 08:44:00 UTC

[jira] [Comment Edited] (FLINK-11107) [state] Avoid memory stateBackend to create arbitrary folders under HA path when no checkpoint path configured

    [ https://issues.apache.org/jira/browse/FLINK-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815949#comment-16815949 ] 

Yun Tang edited comment on FLINK-11107 at 4/12/19 8:43 AM:
-----------------------------------------------------------

[~ykt836] This is the issue which would create a lot of useless folders under the HA directory, so that no new jobs could be launched once the HA directory hits the item-limit exception.


was (Author: yunta):
[~ykt836] This is the issue which would created a lot of useless folders under HA folder and lead to no new jobs could not be launched if the HA folder meet the limitation exception.

> [state] Avoid memory stateBackend to create arbitrary folders under HA path when no checkpoint path configured
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-11107
>                 URL: https://issues.apache.org/jira/browse/FLINK-11107
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Checkpointing
>    Affects Versions: 1.6.2, 1.7.0
>            Reporter: Yun Tang
>            Assignee: Yun Tang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.7.3, 1.6.5
>
>
> Currently, the memory state backend would create a folder named with a random UUID under the HA directory if no checkpoint path was ever configured (the code logic is located in {{StateBackendLoader#fromApplicationOrConfigOrDefault}}). However, the default memory state backend is created not only on the JM side but also on each task manager's side, which means many folders with random UUIDs would be created under the HA directory. It would result in an exception like:
> {noformat}
> The directory item limit of /tmp/flink/ha is exceeded: limit=1048576 items=1048576{noformat}
>  If this happens, no new jobs can be submitted unless we clean up those directories manually (a minimal sketch of the problematic pattern follows below this quoted description).
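
To make the failure mode concrete, here is a minimal Java sketch of the pattern described above; it is not Flink's actual StateBackendLoader code. The class name, the defaultCheckpointDir helper, and the HA_STORAGE_PATH constant are hypothetical names for illustration only. The point it shows: when no checkpoint path is configured, a fallback directory is derived from the HA path plus a fresh random UUID, and because each JM/TM process evaluates this independently, the HA directory accumulates one-off folders until the file system's directory item limit is reached.

    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.UUID;

    // Sketch only: illustrates the "random UUID folder under the HA path" fallback,
    // not the real Flink implementation.
    public class DefaultCheckpointDirSketch {

        // Hypothetical stand-in for the configured HA storage directory.
        private static final String HA_STORAGE_PATH = "/tmp/flink/ha";

        // Derives a checkpoint directory when none is explicitly configured.
        static Path defaultCheckpointDir(String configuredCheckpointDir) {
            if (configuredCheckpointDir != null) {
                return Paths.get(configuredCheckpointDir);
            }
            // Problematic fallback: every call mints a brand-new folder name
            // under the HA path, so nothing is ever reused.
            return Paths.get(HA_STORAGE_PATH, UUID.randomUUID().toString());
        }

        public static void main(String[] args) {
            // Simulate the JobManager plus several TaskManagers, each loading the
            // default memory state backend with no checkpoint path configured.
            for (int i = 0; i < 4; i++) {
                System.out.println("process " + i + " -> " + defaultCheckpointDir(null));
            }
            // Prints four distinct directories under /tmp/flink/ha. At scale, this
            // exhausts HDFS's per-directory entry limit
            // (dfs.namenode.fs-limits.max-directory-items, 1048576 by default),
            // which matches the exception quoted above.
        }
    }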



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)