Posted to dev@flink.apache.org by Stephan Ewen <se...@apache.org> on 2018/05/16 12:26:16 UTC

Re: Errors checkpointing to S3 for high-scale jobs

For posterity: Here is the Jira Issue that tracks this:
https://issues.apache.org/jira/browse/FLINK-9061

On Thu, Mar 22, 2018 at 11:46 PM, Jamie Grier <jg...@lyft.com> wrote:

> I think we need to modify the way we write checkpoints to S3 for high-scale
> jobs (those with many total tasks).  The issue is that we are writing all
> the checkpoint data under a common key prefix.  This is the worst-case
> scenario for S3 performance, since S3 partitions data by key prefix.
>
> In the worst case, checkpoints fail with a 500 status code coming back from
> S3 and an internal error type of TooBusyException.
>
> One possible solution would be to add a hook in the Flink filesystem code
> that allows me to "rewrite" paths.  For example, say I have the checkpoint
> directory set to:
>
> s3://bucket/flink/checkpoints
>
> I would hook that and rewrite that path to:
>
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the
> original path.
>
> This would distribute the checkpoint write load evenly across the S3
> partitions backing the bucket.
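>
> As a rough illustration, here is a minimal sketch of that rewrite using
> only the JDK (the class and method names are hypothetical, not existing
> Flink API):
>
> import java.net.URI;
>
> public class CheckpointPathRewriter {
>
>     // Prefix the key with a short hash of the original path so that
>     // writes spread across S3 partitions instead of hammering one prefix.
>     public static URI rewrite(URI checkpointDir) {
>         String key = checkpointDir.getPath();  // e.g. "/flink/checkpoints"
>         // Cheap, stable hash of the original key (mask keeps it non-negative)
>         String hash = Integer.toHexString(key.hashCode() & 0x7fffffff);
>         return URI.create(checkpointDir.getScheme() + "://"
>                 + checkpointDir.getAuthority() + "/" + hash + key);
>     }
>
>     public static void main(String[] args) {
>         URI original = URI.create("s3://bucket/flink/checkpoints");
>         System.out.println(rewrite(original));
>         // -> s3://bucket/<hash>/flink/checkpoints
>     }
> }
>
> Deriving HASH from the original path (rather than using something random)
> keeps the rewrite deterministic, so the same configured checkpoint
> directory always maps to the same rewritten location across restarts.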
>
> For reference:
> https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>
> Has anyone else hit this issue?  Any other ideas for solutions?  This is a
> pretty serious problem for people trying to checkpoint to S3.
>
> -Jamie
>