Posted to notifications@pekko.apache.org by "jrudolph (via GitHub)" <gi...@apache.org> on 2023/04/19 07:43:36 UTC

[GitHub] [incubator-pekko-http] jrudolph commented on issue #139: PoolInterface buffer size regression issue in akka-http 10.2.10

jrudolph commented on issue #139:
URL: https://github.com/apache/incubator-pekko-http/issues/139#issuecomment-1514282668

   As described in the original PR, the buffer size used to be calculated as
   
   ```scala
   val targetBufferSize = settings.maxOpenRequests - settings.maxConnections
   ```
   
   The reasoning was that there can be one request in flight on each of the connections, plus all the extra ones you want to have buffered as configured by `max-open-requests`, so that the total number of requests you could submit to a pool would indeed be `max-open-requests`. However, it turned out that in some cases the connections would not accept requests during ongoing connection errors, so in fact you could not submit `max-open-requests` concurrently to a pool.
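   
   To make the arithmetic concrete, here is the old calculation with made-up setting values (4 and 32 are illustrative, not the defaults):
   
   ```scala
   // Illustrative values only, not the defaults.
   val maxConnections  = 4
   val maxOpenRequests = 32

   // Old calculation: buffer only the requests beyond one per connection.
   val targetBufferSize = maxOpenRequests - maxConnections  // 28 buffered
   val totalSubmittable = targetBufferSize + maxConnections // 32 == max-open-requests
   ```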
   
   > Prior to the mentioned change, N = max-open-connections
   
   That was only true if `max-open-requests` was set to a lower value than `max-connections`.
   
   After the change, we went back to an earlier behavior where we again allow `max-open-requests` to queue in front of the pool, so that in the best case, indeed `N = max-open-requests + max-connections`.
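   
   With the same made-up values, the post-change sizing looks like this:
   
   ```scala
   // Same illustrative values as above.
   val maxConnections  = 4
   val maxOpenRequests = 32

   // New behavior: the buffer in front of the pool holds max-open-requests
   // entries on its own, so in the best case the pool accepts
   // max-open-requests + max-connections requests before overflowing.
   val bufferSize       = maxOpenRequests                  // 32 buffered
   val bestCaseCapacity = maxOpenRequests + maxConnections // 36 total
   ```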
   
   In any case, `max-open-requests` is not an exact setting because of the buffering involved; it is more like a lower bound on how many concurrent requests you can expect to submit to the pool without incurring an overflow.
   
   I wonder how increasing the buffer can be a problem in your case, given that we now hand out fewer errors than before. Can you show a test case that fails with the new behavior?
   
   I can see how the description of the setting as "The maximum number of open requests accepted into the pool" does not really fit my description above of a lower bound (instead of an upper bound). In any case, if you really want to enforce an upper limit, you can always wrap the interface and count requests yourself to fail fast, as sketched below. Though, it's not quite clear to me when you would really want to be so strict.
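   
   A minimal sketch of such a wrapper (the class name, limit handling, and error type are made up, not part of the library; it assumes the plain `Http().singleRequest` entry point):
   
   ```scala
   import java.util.concurrent.atomic.AtomicInteger
   import scala.concurrent.{ ExecutionContext, Future }

   import org.apache.pekko.actor.ActorSystem
   import org.apache.pekko.http.scaladsl.Http
   import org.apache.pekko.http.scaladsl.model.{ HttpRequest, HttpResponse }

   // Hypothetical wrapper enforcing a strict upper bound on in-flight
   // requests by counting them itself and failing fast on overflow.
   class StrictLimitClient(limit: Int)(implicit system: ActorSystem, ec: ExecutionContext) {
     private val inFlight = new AtomicInteger(0)

     def singleRequest(request: HttpRequest): Future[HttpResponse] =
       if (inFlight.incrementAndGet() > limit) {
         inFlight.decrementAndGet()
         Future.failed(new IllegalStateException(s"more than $limit requests in flight"))
       } else {
         // Release the slot once the response future completes (note: the
         // response entity may still be streaming at that point).
         Http().singleRequest(request).andThen { case _ => inFlight.decrementAndGet() }
       }
   }
   ```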
   
   After all, the whole client-side pool interface architecture has its downsides (many for legacy reasons) and would benefit from a proper streaming interface. On the other hand, even (or maybe rather especially) with a streaming interface you still get all the internal buffers, so we would not recommend relying on exact buffer management in any case. Also, a streaming interface with backpressure has its own challenges in the more complex cases (e.g. head-of-line blocking with super-pools, accidentally unrestricted buffers from just adding streams to a common pool, buffers needed for internal retries, etc.).
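   
   For reference, a minimal sketch of the existing flow-based pool interface (host and requests are made up), which backpressures the stream instead of failing fast:
   
   ```scala
   import scala.util.{ Failure, Success }

   import org.apache.pekko.actor.ActorSystem
   import org.apache.pekko.http.scaladsl.Http
   import org.apache.pekko.http.scaladsl.model.HttpRequest
   import org.apache.pekko.stream.scaladsl.Source

   implicit val system: ActorSystem = ActorSystem("pool-demo")

   // Requests are paired with a correlation token; responses may arrive
   // in a different order and carry the token back.
   val poolFlow = Http().cachedHostConnectionPool[Int]("example.com")

   Source(List(HttpRequest(uri = "/a") -> 1, HttpRequest(uri = "/b") -> 2))
     .via(poolFlow)
     .runForeach {
       case (Success(response), id) =>
         println(s"$id: ${response.status}")
         response.discardEntityBytes()
       case (Failure(error), id) =>
         println(s"$id failed: $error")
     }
   ```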

