Posted to github@arrow.apache.org by "westonpace (via GitHub)" <gi...@apache.org> on 2023/04/17 18:15:08 UTC

[GitHub] [arrow] westonpace commented on issue #34892: [C++] Mechanism for throttling remote filesystems to avoid rate limiting

westonpace commented on issue #34892:
URL: https://github.com/apache/arrow/issues/34892#issuecomment-1511862783

   It could be based on https://github.com/apache/arrow-rs/blob/master/object_store/src/throttle.rs
   
   We already have something somewhat similar internally with https://github.com/apache/arrow/blob/main/cpp/src/arrow/io/slow.h and `SlowFileSystem` (in `filesystem.h`)
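   The core idea in throttle.rs is a rate limiter applied before each filesystem call. A minimal sketch of that idea as a token bucket is below; the class and method names are illustrative, not an existing Arrow API:

```cpp
#include <algorithm>
#include <chrono>
#include <mutex>
#include <thread>

// Hypothetical token-bucket rate limiter, the same idea used by
// object_store's throttle.rs. A throttled filesystem wrapper would call
// Acquire() before issuing each remote request.
class TokenBucket {
 public:
  using Clock = std::chrono::steady_clock;

  // rate_per_sec: sustained requests/second; burst: maximum tokens banked.
  TokenBucket(double rate_per_sec, double burst)
      : rate_(rate_per_sec), capacity_(burst), tokens_(burst),
        last_(Clock::now()) {}

  // Block until one token is available, then consume it.
  void Acquire() {
    std::unique_lock<std::mutex> lock(mu_);
    Refill();
    while (tokens_ < 1.0) {
      // Sleep roughly long enough to accrue one token, then re-check.
      std::chrono::duration<double> wait((1.0 - tokens_) / rate_);
      lock.unlock();
      std::this_thread::sleep_for(wait);
      lock.lock();
      Refill();
    }
    tokens_ -= 1.0;
  }

 private:
  // Credit tokens for the time elapsed since the last refill, capped at burst.
  void Refill() {
    auto now = Clock::now();
    double elapsed = std::chrono::duration<double>(now - last_).count();
    last_ = now;
    tokens_ = std::min(capacity_, tokens_ + elapsed * rate_);
  }

  double rate_;
  double capacity_;
  double tokens_;
  Clock::time_point last_;
  std::mutex mu_;
};
```

   A wrapper filesystem (analogous to `SlowFileSystem`) could hold one of these and call `Acquire()` at the top of each `OpenInputFile`/`GetFileInfo`-style method before delegating to the wrapped filesystem.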
   
   However, for S3, it seems we already have retry logic (I was not aware of this until recently).  I think the first question is to understand why that retry logic doesn't help here.  By default, an S3 filesystem should (I think) retry every 200 ms, for up to 6 seconds total.
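   For reference, the retry behavior described above (fixed 200 ms interval, ~6 s total) amounts to something like the following loop; this is a sketch of the concept, not Arrow's actual S3 retry-strategy API:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical sketch: re-run `attempt` (which returns true on success)
// every `interval` until the accumulated wait reaches `deadline`.
// With interval = 200 ms and deadline = 6 s this mirrors the defaults
// described above (roughly 30 attempts before giving up).
bool RetryWithBackoff(const std::function<bool()>& attempt,
                      std::chrono::milliseconds interval,
                      std::chrono::milliseconds deadline) {
  auto waited = std::chrono::milliseconds(0);
  while (true) {
    if (attempt()) return true;           // request succeeded
    if (waited >= deadline) return false; // give up after the deadline
    std::this_thread::sleep_for(interval);
    waited += interval;
  }
}
```

   If rate limiting errors are still surfacing to users despite this, it would be worth checking whether the throttling responses S3 returns are actually classified as retryable by that logic.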


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org