Posted to notifications@james.apache.org by GitBox <gi...@apache.org> on 2021/09/22 04:39:25 UTC

[GitHub] [james-project] Arsnael commented on pull request #665: JAMES-3150 Better handle massive deletions as part of BloomFilterGCAl…

Arsnael commented on pull request #665:
URL: https://github.com/apache/james-project/pull/665#issuecomment-924577608


   > With AWS S3, they limit the request rate (and throw an exception if there are too many requests in a short time).
   > Some other cloud vendors may behave the same way.
   > So, I have a bit of thinking to do here
   
   I'm not sure exactly how it works, but I would expect the S3 Java client being used to have a retry mechanism for such issues. If not, I guess we would have a lot of blobs disappearing, and not only there?
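   For context, AWS SDK clients do apply a built-in retry policy with exponential backoff to throttling errors (e.g. 503 SlowDown), and the policy is tunable on the client builder. A minimal, hypothetical sketch of that retry-with-backoff pattern (all names here are illustrative, not the SDK's actual internals) might look like:

   ```java
   import java.util.concurrent.Callable;

   public class RetryDemo {
       // Hypothetical sketch of retry-with-exponential-backoff, the pattern an
       // SDK typically applies to throttling errors before surfacing them.
       static <T> T retryWithBackoff(Callable<T> call, int maxAttempts, long baseDelayMs) throws Exception {
           for (int attempt = 1; ; attempt++) {
               try {
                   return call.call();
               } catch (Exception e) {
                   if (attempt >= maxAttempts) throw e; // retries exhausted: propagate
                   long delay = baseDelayMs << (attempt - 1); // e.g. 10, 20, 40 ms...
                   Thread.sleep(delay);
               }
           }
       }

       public static void main(String[] args) throws Exception {
           int[] calls = {0};
           // Simulated S3 call that throws "SlowDown" twice, then succeeds.
           String result = retryWithBackoff(() -> {
               if (++calls[0] < 3) throw new RuntimeException("SlowDown");
               return "deleted";
           }, 5, 10);
           System.out.println(result + " after " + calls[0] + " attempts");
           // prints: deleted after 3 attempts
       }
   }
   ```

   So a burst of deletions should be retried transparently up to the attempt limit; only past that limit would the exception reach the caller, which is the case the GC algorithm would still need to handle.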


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@james.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
For additional commands, e-mail: notifications-help@james.apache.org