Posted to notifications@pekko.apache.org by "jphelp32 (via GitHub)" <gi...@apache.org> on 2023/04/18 20:24:01 UTC

[GitHub] [incubator-pekko-http] jphelp32 opened a new issue, #139: PoolInterface buffer size regression issue in akka-http 10.2.10

jphelp32 opened a new issue, #139:
URL: https://github.com/apache/incubator-pekko-http/issues/139

   ### Discussed in https://github.com/apache/incubator-pekko-http/discussions/138
   
   
   Originally posted by **jphelp32** on April 18, 2023
   @jrudolph
   can you comment on the likelihood of pekko-http addressing this regression that was introduced in akka-http 10.2.10: https://github.com/akka/akka-http/pull/4132#issuecomment-1238618118
   We are currently locked at akka-http 10.2.9 due to this issue. As we're planning our migration to pekko, I'm trying to plan whether to expect a fix in pekko-http, or if we should plan to adjust our configs according to the newer/broken(?) behavior that was introduced in 10.2.10.
   thanks
   
   _Original akka-http issue_: https://github.com/akka/akka-http/issues/4156
   _Summary_:
   > 10.2.10 seems to have changed the semantics of the host connection pool max-open-requests configuration. Consider that N requests to a single host are accepted by Http().singleRequest() before a BufferOverflowException occurs. Prior to the mentioned change, N = max-open-connections. Now after this change, it appears that N = max-open-requests + max-connections. 
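   
   A minimal reproduction sketch, not taken from the original issue: fire more requests at a single slow host than the pool can possibly hold and count how many `Http().singleRequest()` futures fail with `BufferOverflowException`. The backend URI, pool sizes, and request count below are made-up example values.
   
   ```scala
   // Hypothetical sketch; the backend URI and the numbers are example values only.
   import org.apache.pekko.actor.ActorSystem
   import org.apache.pekko.http.scaladsl.Http
   import org.apache.pekko.http.scaladsl.model.HttpRequest
   import org.apache.pekko.http.scaladsl.settings.ConnectionPoolSettings
   import org.apache.pekko.stream.BufferOverflowException
   import scala.util.Failure

   implicit val system: ActorSystem = ActorSystem("pool-overflow-check")
   import system.dispatcher

   val settings = ConnectionPoolSettings(system)
     .withMaxConnections(4)   // host-connection-pool.max-connections
     .withMaxOpenRequests(32) // host-connection-pool.max-open-requests

   // Point this at a backend slow enough that no request completes while the
   // burst is submitted, then observe how many requests are accepted before
   // the pool starts rejecting with BufferOverflowException.
   (1 to 100).foreach { i =>
     Http()
       .singleRequest(HttpRequest(uri = "http://slow-backend.example:8080/"), settings = settings)
       .onComplete {
         case Failure(_: BufferOverflowException) => println(s"request $i rejected: buffer full")
         case _                                   => // accepted by the pool
       }
   }
   ```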




[GitHub] [incubator-pekko-http] pjfanning commented on issue #139: PoolInterface buffer size regression issue in akka-http 10.2.10

Posted by "pjfanning (via GitHub)" <gi...@apache.org>.
pjfanning commented on issue #139:
URL: https://github.com/apache/incubator-pekko-http/issues/139#issuecomment-1515170297

   The Akka issue has not led to a change yet. So far, the new behaviour seems to have bedded in.




[GitHub] [incubator-pekko-http] jphelp32 commented on issue #139: PoolInterface buffer size regression issue in akka-http 10.2.10

Posted by "jphelp32 (via GitHub)" <gi...@apache.org>.
jphelp32 commented on issue #139:
URL: https://github.com/apache/incubator-pekko-http/issues/139#issuecomment-1515162660

   Thanks @jrudolph. Our use case is a reverse-proxy API gateway service. We currently rely upon max-open-requests and max-connections as a service-protection mechanism to fail fast when a target backend service is under duress but does not itself have the capability to fail fast. I'm not sure that we have a strong opinion on whether the configuration should be an upper bound or a lower bound. We just need to understand which is the expected behavior, and be able to verify that behavior with unit tests. To my knowledge, our existing unit tests (based on the 10.2.9 behavior) never encountered the scenario that prompted the change. If it ever occurred in production, we didn't notice.
   So would you say that this is working as designed and will not be changed going forward? Perhaps only a documentation update would be forthcoming to clarify the behavior to be expected?




[GitHub] [incubator-pekko-http] jrudolph commented on issue #139: PoolInterface buffer size regression issue in akka-http 10.2.10

Posted by "jrudolph (via GitHub)" <gi...@apache.org>.
jrudolph commented on issue #139:
URL: https://github.com/apache/incubator-pekko-http/issues/139#issuecomment-1514282668

   As described in the original PR, the buffer size used to be calculated as
   
   ```scala
   val targetBufferSize = settings.maxOpenRequests - settings.maxConnections
   ```
   
   The reasoning was that there can be one request going on in each of the connections, plus all the extra ones you want to have buffered by the configuration of `max-open-requests`, so that the total number of requests you would be able to submit to a pool would indeed be `max-open-requests`. However, it turned out that in some cases, the connections would not accept requests during ongoing connection errors, so that in fact you would not be able to submit `max-open-requests` concurrently to a pool.
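   
   To make the arithmetic concrete, a worked example with made-up numbers (not taken from the PR):
   
   ```scala
   // Illustrative numbers only.
   val maxConnections  = 4
   val maxOpenRequests = 32

   // Old sizing: the buffer plus one in-flight request per connection adds up
   // to max-open-requests.
   val oldBufferSize = maxOpenRequests - maxConnections // 28
   val oldTotal      = oldBufferSize + maxConnections   // 32 = max-open-requests

   // New sizing: the full max-open-requests is buffered in front of the pool,
   // so in the best case the pool accepts max-open-requests + max-connections.
   val newBufferSize = maxOpenRequests                  // 32
   val newTotal      = newBufferSize + maxConnections   // 36 = max-open-requests + max-connections
   ```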
   
   > Prior to the mentioned change, N = max-open-connections
   
   That was only true if `max-open-requests` was set to a lower value than `max-connections`.
   
   After the change, we went back to an earlier behavior where we again allow `max-open-requests` requests to be buffered in front of the pool, so that in the best case, indeed `N = max-open-requests + max-connections`.
   
   In any case, `max-open-requests` is not an exact setting because of the buffering involved; it is more like a lower bound on how many concurrent requests you can expect to submit to the pool without incurring an overflow.
   
   I wonder how increasing the buffer can be a problem in your case, given that we are handing out fewer errors than before. Can you show a test case that fails with the new behavior?
   
   I can see how the description of the setting as "The maximum number of open requests accepted into the pool" does not really fit my description above as a lower bound (instead of an upper bound). In any case, if you really want to enforce an upper limit, you can always wrap the interface and count requests yourself to err out fast. Though it's not quite clear to me when you would really want to be so strict.
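   
   For illustration, a minimal sketch of that "wrap the interface and count requests yourself" idea; `BoundedClient`, its limit, and its error message are made-up names, not part of pekko-http:
   
   ```scala
   import java.util.concurrent.atomic.AtomicInteger

   import org.apache.pekko.actor.ActorSystem
   import org.apache.pekko.http.scaladsl.Http
   import org.apache.pekko.http.scaladsl.model.{ HttpRequest, HttpResponse }

   import scala.concurrent.Future

   // Hypothetical wrapper that enforces a hard upper bound on in-flight requests
   // and fails fast once the bound is exceeded.
   final class BoundedClient(maxInFlight: Int)(implicit system: ActorSystem) {
     import system.dispatcher

     private val inFlight = new AtomicInteger(0)

     def singleRequest(request: HttpRequest): Future[HttpResponse] =
       if (inFlight.incrementAndGet() > maxInFlight) {
         inFlight.decrementAndGet()
         Future.failed(new IllegalStateException(s"more than $maxInFlight requests in flight"))
       } else {
         // Release the slot whether the request succeeds or fails.
         Http().singleRequest(request).andThen { case _ => inFlight.decrementAndGet() }
       }
   }

   // Usage: new BoundedClient(maxInFlight = 64).singleRequest(HttpRequest(uri = "http://backend.example/"))
   ```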
   
   After all, the whole client-side pool interface architecture has its downsides (many for legacy reasons) and would benefit from a proper streaming interface. On the other hand, even (or maybe rather especially) with a streaming interface you also get all the internal buffers, so we would not recommend relying on exact buffer management in any case. Also, a streaming interface with backpressure has its own challenges in the more complex cases (e.g. head-of-line blocking with super-pools, accidentally unrestricted buffers by just adding streams to a common pool, needed buffers for internal retries, etc.).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@pekko.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: notifications-unsubscribe@pekko.apache.org
For additional commands, e-mail: notifications-help@pekko.apache.org