Posted to issues@flink.apache.org by "Steven Zhen Wu (JIRA)" <ji...@apache.org> on 2018/04/02 17:51:00 UTC

[jira] [Comment Edited] (FLINK-9061) S3 checkpoint data not partitioned well -- causes errors and poor performance

    [ https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16422843#comment-16422843 ] 

Steven Zhen Wu edited comment on FLINK-9061 at 4/2/18 5:50 PM:
---------------------------------------------------------------

Usually a 4-char random prefix can go a long way. Even a 2-char random prefix can be sufficient unless the workload is super high. Again, as Steve Loughran said, nothing official here; this is just based on experience and speculation. But I think we should give the user control over the number of characters in the entropy part.

[~jgrier]  The Flink checkpoint path looks like "s3://bucket/checkpoint-path-prefix/chk-121/random-UUID". So yes, reversing the key name can work, because the last part is a random UUID. But I think we should give the user control over which part of the checkpoint path introduces the entropy, e.g. I may want to keep the top-level prefixes (checkpoints and savepoints):
{code:java}
s3://bucket/checkpoints/<entropy>/rest-of-checkpoint-path
s3://bucket/savepoints/<entropy>/rest-of-savepoint-path{code}
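
Just to illustrate (nothing like this exists in Flink today; the class name below is made up for the sketch): the <entropy> placeholder could be expanded with a configurable number of random characters, e.g.
{code:java}
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical helper, only to sketch the idea: expand an "<entropy>"
// placeholder in a configured checkpoint path with N random hex characters.
public class CheckpointPathEntropy {

    private static final char[] HEX = "0123456789abcdef".toCharArray();

    public static String inject(String configuredPath, int entropyLength) {
        StringBuilder entropy = new StringBuilder(entropyLength);
        for (int i = 0; i < entropyLength; i++) {
            entropy.append(HEX[ThreadLocalRandom.current().nextInt(HEX.length)]);
        }
        return configuredPath.replace("<entropy>", entropy.toString());
    }
}

// Example:
//   inject("s3://bucket/checkpoints/<entropy>/rest-of-checkpoint-path", 4)
//   might return "s3://bucket/checkpoints/3f0a/rest-of-checkpoint-path"
{code}
With 4 hex characters that gives 65536 possible prefixes, which should spread the write load across plenty of S3 partitions.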
 


was (Author: stevenz3wu):
Usually a 4-char random prefix can go a long way. Even a 2-char random prefix can be sufficient unless the workload is super high. Again, as Steve Loughran said, nothing official here; this is just based on experience and speculation. But I think we should give the user control over the number of characters in the entropy part.

[~jgrier]  The Flink checkpoint path looks like "s3://bucket/checkpoint-path-prefix/chk-121/random-UUID". So yes, reversing the key name can work. But I think we should give the user control over which part of the checkpoint path introduces the entropy, e.g. I may want to keep the top-level prefixes (checkpoints and savepoints):
{code:java}
s3://bucket/checkpoints/<entropy>/rest-of-checkpoint-path
s3://bucket/savepoints/<entropy>/rest-of-savepoint-path{code}
 

> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.4.2
>            Reporter: Jamie Grier
>            Priority: Critical
>
> I think we need to modify the way we write checkpoints to S3 for high-scale jobs (those with many total tasks).  The issue is that we are writing all the checkpoint data under a common key prefix, which is the worst-case scenario for S3 performance, since S3 uses the key prefix as its partition key.
>  
> In the worst case, checkpoints fail with a 500 status code returned by S3 and an internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code that allows me to "rewrite" paths.  For example, say I have the checkpoint directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
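>  
> To sketch what I mean (purely hypothetical, not an existing Flink hook; the class name is made up), the rewrite could prepend a few hex characters of a hash of the key, applied to each path as it is written:
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.security.MessageDigest;
> import java.security.NoSuchAlgorithmException;
>
> // Hypothetical path-rewrite hook, e.g.
> //   s3://bucket/flink/checkpoints/...  ->  s3://bucket/3f0a/flink/checkpoints/...
> public class HashedS3PathRewriter {
>
>     public static String rewrite(String originalPath) {
>         // Split "s3://bucket" from the rest of the key.
>         int bucketEnd = originalPath.indexOf('/', "s3://".length());
>         String bucketPart = originalPath.substring(0, bucketEnd);
>         String keyPart = originalPath.substring(bucketEnd + 1);
>
>         // Prefix the key with the first 4 hex chars of its SHA-256 hash.
>         return bucketPart + "/" + sha256Hex(keyPart).substring(0, 4) + "/" + keyPart;
>     }
>
>     private static String sha256Hex(String input) {
>         try {
>             byte[] digest = MessageDigest.getInstance("SHA-256")
>                     .digest(input.getBytes(StandardCharsets.UTF_8));
>             StringBuilder sb = new StringBuilder(digest.length * 2);
>             for (byte b : digest) {
>                 sb.append(String.format("%02x", b & 0xff));
>             }
>             return sb.toString();
>         } catch (NoSuchAlgorithmException e) {
>             throw new RuntimeException("SHA-256 not available", e);
>         }
>     }
> }
> {code}
> Since the hash is deterministic per path, files under different chk-N directories end up under different prefixes, and a reader can compute the same rewrite to find the data again.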
>  
> For reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Has anyone else hit this issue?  Any other ideas for solutions?  This is a pretty serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)