Posted to issues@flink.apache.org by "Steven Zhen Wu (JIRA)" <ji...@apache.org> on 2018/03/24 01:03:00 UTC

[jira] [Comment Edited] (FLINK-9061) S3 checkpoint data not partitioned well -- causes errors and poor performance

    [ https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16412326#comment-16412326 ] 

Steven Zhen Wu edited comment on FLINK-9061 at 3/24/18 1:02 AM:
----------------------------------------------------------------

Jamie,

yes, we ran into the same issue at Netflix, and did exactly what you described. Here is what our config looks like:

state.backend.fs.checkpointdir: s3://bucket/__ENTROPY_KEY__/flink/checkpoints
state.backend.fs.checkpointdir.injectEntropy.enabled: true
state.backend.fs.checkpointdir.injectEntropy.key: __ENTROPY_KEY__

We modified the state backend code to support it. Without the random chars in the path, there is no way for S3 to partition the bucket to sustain a high request rate; I don't see any other way around it. There is an obvious downside to having such random chars in the s3 path: you can no longer do prefix listing.
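Roughly, the rewrite can be sketched like this (a simplified illustration of the idea, not our actual patch; the class and method names here are made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

/**
 * Sketch of entropy injection for S3 checkpoint paths: replace a placeholder
 * token in the configured path with a short hash so that checkpoint keys
 * spread across S3 partitions instead of sharing one common prefix.
 */
public class EntropyInjector {

    // Placeholder token, matching the config shown above.
    static final String ENTROPY_KEY = "__ENTROPY_KEY__";

    /**
     * Replaces the entropy placeholder with the first 4 bytes (8 hex chars)
     * of a SHA-256 hash of the given seed. Seeding with something stable per
     * task (e.g. job id + operator id) keeps the path deterministic while
     * still spreading load across key prefixes.
     */
    static String injectEntropy(String checkpointPath, String seed) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(seed.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < 4; i++) {
                hex.append(String.format("%02x", digest[i]));
            }
            return checkpointPath.replace(ENTROPY_KEY, hex.toString());
        } catch (Exception e) {
            throw new RuntimeException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        String path = "s3://bucket/" + ENTROPY_KEY + "/flink/checkpoints";
        // Same seed always yields the same prefix; different seeds spread out.
        System.out.println(injectEntropy(path, "job-42/operator-7"));
    }
}
```

The trade-off mentioned above follows directly: once the hash sits at the front of the key, a single prefix listing over all checkpoints is no longer possible.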

 

Steven


> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.4.2
>            Reporter: Jamie Grier
>            Priority: Critical
>
> I think we need to modify the way we write checkpoints to S3 for high-scale jobs (those with many total tasks).  The issue is that we are writing all the checkpoint data under a common key prefix.  This is the worst case scenario for S3 performance since the key is used as a partition key.
>  
> In the worst case checkpoints fail with a 500 status code coming back from S3 and an internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code that allows me to "rewrite" paths.  For example say I have the checkpoint directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
>  
> For reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Any other people hit this issue?  Any other ideas for solutions?  This is a pretty serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)