Posted to issues@flink.apache.org by "Till Rohrmann (Jira)" <ji...@apache.org> on 2019/09/27 09:33:00 UTC

[jira] [Commented] (FLINK-5931) Make Flink highly available even if defaultFS is unavailable

    [ https://issues.apache.org/jira/browse/FLINK-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939255#comment-16939255 ] 

Till Rohrmann commented on FLINK-5931:
--------------------------------------

I think this issue has been abandoned [~yanghua]. If someone picks it up, then I think we need a proper design for how this should work.

> Make Flink highly available even if defaultFS is unavailable
> ------------------------------------------------------------
>
>                 Key: FLINK-5931
>                 URL: https://issues.apache.org/jira/browse/FLINK-5931
>             Project: Flink
>          Issue Type: New Feature
>          Components: Runtime / Coordination
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
>            Priority: Major
>
> To use Flink in mission-critical environments, it must remain available even if the {{defaultFS}} is unavailable.
> We have deployed HDFS in HA mode in our production environment. In our experience, we have seen performance degradation and downtime when the HDFS cluster is being expanded or is under maintenance. In such cases it is desirable to deploy jobs through an alternative filesystem (e.g., S3).
> This JIRA is to track the improvements that enable Flink to continue operating even if the {{defaultFS}} is unavailable.
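
For context, a minimal flink-conf.yaml sketch of the idea described above (pointing checkpoint, savepoint, and HA metadata storage at S3 instead of the HDFS defaultFS) might look roughly as follows. The bucket name and ZooKeeper quorum are placeholders, and one of Flink's S3 filesystem implementations (flink-s3-fs-hadoop or flink-s3-fs-presto) has to be available to the cluster; this only redirects state and HA storage, not every place Flink may touch the defaultFS:

    # Use a filesystem-backed state backend, but store state on S3
    # instead of the HDFS defaultFS (bucket name is a placeholder).
    state.backend: filesystem
    state.checkpoints.dir: s3://my-bucket/flink/checkpoints
    state.savepoints.dir: s3://my-bucket/flink/savepoints

    # ZooKeeper-based HA services with their recovery metadata on S3,
    # so JobManager failover does not depend on HDFS being reachable.
    high-availability: zookeeper
    high-availability.zookeeper.quorum: zk-host:2181
    high-availability.storageDir: s3://my-bucket/flink/ha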



--
This message was sent by Atlassian Jira
(v8.3.4#803005)