Posted to notifications@apisix.apache.org by "shihaiyang-world (via GitHub)" <gi...@apache.org> on 2023/04/16 02:41:07 UTC

[GitHub] [apisix] shihaiyang-world opened a new issue, #9313: help request: In high concurrency scenarios, why does configuring "discovery_type: kubernetes" in upstream fail to meet performance expectations?

shihaiyang-world opened a new issue, #9313: help request: In high concurrency scenarios, why does configuring "discovery_type: kubernetes" in upstream fail to meet performance expectations?
URL: https://github.com/apache/apisix/issues/9313

   ### Description
   
   I have a high-concurrency requirement: apisix must support 200,000+ requests per second. In practice, however, when the upstream of apisix is set to `discovery_type: kubernetes`, apisix becomes the performance bottleneck, whereas when the upstream is set to static `nodes`, performance meets expectations.
   
   Here are the steps of the experiment:
   
   - Step 1: Prepare a target service (an nginx service), load-test it directly with wrk, and verify that it can support `28W QPS` (280,000); this serves as the **baseline**.  **( wrk -> nginx[15 Pod] )**  ✅
   - Step 2: Load-test a single apisix pod with `8C 8GB`. The result is `8W QPS`, and CPU usage already saturates all 8 cores, so the bottleneck is the single apisix node.  **( wrk -> apisix[1 Pod] -> nginx[15 Pod] )**  ✅
   - Step 3: Scale apisix to 4 pods and load-test again. It was expected to reach `28W QPS`, but it fell short: the result was only `17W QPS` (170,000), and CPU usage did not peak.  **( wrk -> apisix[4 Pod][apisix kubernetes discovery] -> nginx[15 Pod] )**  ❌
   - Step 4: Change the upstream of apisix to the **service IP**, i.e. delegate load balancing to the Kubernetes Service itself. It was expected to reach `28W QPS`, and the result meets that expectation: `28W QPS`.  **( wrk -> apisix[4 Pod][k8s Service IP] -> nginx[15 Pod] )**  ✅
   
   The screenshot below shows the test result:
   ![image](https://user-images.githubusercontent.com/4383037/232262199-841dd64d-60c3-4dc5-b6a6-ef02e3412048.png)
   
   **`Why did the performance not meet expectations when using "discovery_type: kubernetes" in the upstream?`**
   
   
   Here is the route configuration used in steps 2-3:
   ```yaml
   uri: /test-nginx
   host: nginx-benchmark.com
   upstream:
     timeout:
       connect: 50
       send: 50
       read: 50
     type: roundrobin
     scheme: http
     discovery_type: kubernetes
     pass_host: pass
     name: nginx
     service_name: infrafe/devops-test-nginx:http
     keepalive_pool:
       idle_timeout: 60
       requests: 100000
       size: 320
   status: 1
   ```
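
   For reference, `discovery_type: kubernetes` also requires a `discovery` section in apisix's own `config.yaml`; the `service_name` above follows the `namespace/ServiceName:portName` pattern (here: namespace `infrafe`, Service `devops-test-nginx`, named port `http`). A minimal sketch of that stanza, assuming the default in-cluster API endpoint and service-account token path (neither is shown in this issue):

   ```yaml
   discovery:
     kubernetes:
       service:
         schema: https
         # environment variables injected into every pod by Kubernetes
         host: ${KUBERNETES_SERVICE_HOST}
         port: ${KUBERNETES_SERVICE_PORT}
       client:
         # default in-cluster service-account token; adjust if your cluster differs
         token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
   ```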
   
   And here is the route configuration used in step 4:
   ```yaml
   uri: /test-nginx
   host: nginx-benchmark.com
   upstream:
     nodes:
       - host: 10.31.241.215
         port: 80
         weight: 1
     timeout:
       connect: 6
       send: 6
       read: 6
     type: roundrobin
     scheme: http
     pass_host: pass
     keepalive_pool:
       idle_timeout: 60
       requests: 100000
       size: 320
   status: 1
   ```
   
   
   
   
   ### Environment
   
   - APISIX version (run `apisix version`): 2.15.0
   - Operating system (run `uname -a`): Linux apisix-rta-68c56cbd58-4dbxt 5.4.119-19-0009.11 #1 SMP Wed Oct 5 18:41:07 CST 2022 x86_64 Linux
   - OpenResty / Nginx version (run `openresty -V` or `nginx -V`): openresty/1.21.4.1
   - etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`):  
   - APISIX Dashboard version, if relevant:  2.13
   - Plugin runner version, for issues related to plugin runners:  
   - LuaRocks version, for installation issues (run `luarocks --version`):
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [apisix] shihaiyang-world commented on issue #9313: help request: In high concurrency scenarios, why does configuring "discovery_type: kubernetes" in upstream fail to meet performance expectations?

Posted by "shihaiyang-world (via GitHub)" <gi...@apache.org>.
shihaiyang-world commented on issue #9313:
URL: https://github.com/apache/apisix/issues/9313#issuecomment-1510401945

   When the least_conn load-balancing algorithm is used instead, the result also reaches `28W QPS`.
   So apisix's roundrobin load-balancing algorithm is not well suited to ultra-high-concurrency scenarios; the least_conn algorithm should be used instead.
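
   For anyone landing here later: the change is a single field on the route's upstream. A minimal sketch based on the step-2/3 route above (all other fields unchanged):

   ```yaml
   upstream:
     type: least_conn                  # was: roundrobin
     scheme: http
     discovery_type: kubernetes
     service_name: infrafe/devops-test-nginx:http
     pass_host: pass
   ```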




[GitHub] [apisix] shihaiyang-world closed issue #9313: help request: In high concurrency scenarios, why does configuring "discovery_type: kubernetes" in upstream fail to meet performance expectations?

Posted by "shihaiyang-world (via GitHub)" <gi...@apache.org>.
shihaiyang-world closed issue #9313: help request: In high concurrency scenarios, why does configuring "discovery_type: kubernetes" in upstream fail to meet performance expectations?
URL: https://github.com/apache/apisix/issues/9313




[GitHub] [apisix] wolgod commented on issue #9313: help request: In high concurrency scenarios, why does configuring "discovery_type: kubernetes" in upstream fail to meet performance expectations?

Posted by "wolgod (via GitHub)" <gi...@apache.org>.
wolgod commented on issue #9313:
URL: https://github.com/apache/apisix/issues/9313#issuecomment-1579948987

   Yes. When you use a Kubernetes Service IP, Kubernetes also defaults to RR (round-robin) load balancing. The difference is that an API gateway like Apisix has its own RR implementation, whereas a Service IP balanced by IPVS uses the kernel's IPVS RR scheduler. In other words, Apisix's RR implementation may not distribute load as evenly as the IPVS RR algorithm.
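
   Whether a cluster's Service IPs are actually balanced by IPVS (rather than iptables) depends on how kube-proxy is configured; that detail is not shown in this issue, so the following KubeProxyConfiguration sketch is an assumption for illustration only:

   ```yaml
   # kube-proxy configuration (typically stored in the kube-proxy ConfigMap)
   apiVersion: kubeproxy.config.k8s.io/v1alpha1
   kind: KubeProxyConfiguration
   mode: "ipvs"        # balance Service IPs with the kernel IPVS load balancer
   ipvs:
     scheduler: "rr"   # round robin, the default IPVS scheduler
   ```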

