Posted to notifications@apisix.apache.org by "bingxueai123456 (via GitHub)" <gi...@apache.org> on 2023/04/29 07:49:11 UTC

[GitHub] [apisix] bingxueai123456 opened a new issue, #9398: Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase.

bingxueai123456 opened a new issue, #9398:
URL: https://github.com/apache/apisix/issues/9398

   ### Description
   
    I have a load balancer pointing to the APISIX service. After each HTTP request, APISIX always actively closes the connection, so every request needs a fresh TCP connection and TIME_WAIT sockets keep piling up. (I captured the HTTP packets and confirmed that it was indeed APISIX, not the load balancer, that actively closed the connection.)
    Screenshot: https://user-images.githubusercontent.com/25632537/235291363-5803457c-e699-4e97-8ed5-f5b7492c1908.png
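    
    One plausible cause, assuming stock nginx behaviour: nginx actively closes a client connection either once it has been idle for `keepalive_timeout` or after it has carried `keepalive_requests` requests (default 1000 on nginx >= 1.19.10, which OpenResty 1.21.4.1 bundles), and the side that closes first is the one left holding the TIME_WAIT socket. A minimal sketch of the two client-side directives involved; the values are illustrative, not taken from this deployment:
    
        http {
            # Client connections idle longer than this are actively closed
            # by nginx, leaving a TIME_WAIT socket on the proxy host.
            keepalive_timeout 60s;
    
            # After this many requests on one client connection, nginx answers
            # with "Connection: close" and closes it even if the idle timeout
            # has not expired. When unset, it defaults to 1000.
            keepalive_requests 10000;   # illustrative raised value
        }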
   
   
   ### Environment
   
    - APISIX version: 3.2.0
    - Operating system: Linux api-003 5.10.134-13.1.al8.x86_64 #1 SMP Mon Feb 6 14:54:50 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
    - OpenResty / Nginx version: openresty/1.21.4.1
    - etcd version: 3.5.0
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] closed issue #9398: Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase.
URL: https://github.com/apache/apisix/issues/9398


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "shreemaan-abhishek (via GitHub)" <gi...@apache.org>.
shreemaan-abhishek commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1904250198

   @bingxueai123456 any updates?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "bingxueai123456 (via GitHub)" <gi...@apache.org>.
bingxueai123456 commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1869995043

   > > Because they are using the same poolname
   > 
   > how can you confirm this? @alptugay
   
   # This is a read-only file, do not try to modify it.
   master_process on;
   
   worker_processes auto;
   
   # main configuration snippet starts
   
   # main configuration snippet ends
   
   error_log logs/error.log warn;
   pid logs/nginx.pid;
   
   worker_rlimit_nofile 20480;
   
   events {
       accept_mutex off;
       worker_connections 10620;
   }
   
   worker_rlimit_core  16G;
   
   worker_shutdown_timeout 240s;
   
   env APISIX_PROFILE;
   env PATH; # for searching external plugin runner's binary
   
   # reserved environment variables for configuration
   env APISIX_DEPLOYMENT_ETCD_HOST;
   
   
   thread_pool grpc-client-nginx-module threads=1;
   
   lua {
   }
   
   
   
   
   http {
       # put extra_lua_path in front of the builtin path
       # so user can override the source code
       lua_package_path  "$prefix/deps/share/lua/5.1/?.lua;$prefix/deps/share/lua/5.1/?/init.lua;/usr/local/apisix/?.lua;/usr/local/apisix/?/init.lua;;/usr/local/apisix/?.lua;./?.lua;/usr/local/openresty/luajit/share/luajit-2.1.0-beta3/?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/openresty/luajit/share/lua/5.1/?.lua;/usr/local/openresty/luajit/share/lua/5.1/?/init.lua;;";
       lua_package_cpath "$prefix/deps/lib64/lua/5.1/?.so;$prefix/deps/lib/lua/5.1/?.so;;./?.so;/usr/local/lib/lua/5.1/?.so;/usr/local/openresty/luajit/lib/lua/5.1/?.so;/usr/local/lib/lua/5.1/loadall.so;";
   
       lua_max_pending_timers 16384;
       lua_max_running_timers 4096;
   
       lua_shared_dict internal-status 10m;
       lua_shared_dict upstream-healthcheck 10m;
       lua_shared_dict worker-events 10m;
       lua_shared_dict lrucache-lock 10m;
       lua_shared_dict balancer-ewma 10m;
       lua_shared_dict balancer-ewma-locks 10m;
       lua_shared_dict balancer-ewma-last-touched-at 10m;
       lua_shared_dict etcd-cluster-health-check 10m; # etcd health check
   
       # for discovery shared dict
   
   
       lua_shared_dict plugin-limit-conn 10m;
   
       lua_shared_dict plugin-limit-req 10m;
   
       lua_shared_dict plugin-limit-count 10m;
       lua_shared_dict plugin-limit-count-redis-cluster-slot-lock 1m;
       lua_shared_dict plugin-limit-count-reset-header 10m;
   
       lua_shared_dict prometheus-metrics 10m;
   
   
       lua_shared_dict plugin-api-breaker 10m;
   
       # for openid-connect and authz-keycloak plugin
       lua_shared_dict discovery 1m; # cache for discovery metadata documents
   
       # for openid-connect plugin
       lua_shared_dict jwks 1m; # cache for JWKs
       lua_shared_dict introspection 10m; # cache for JWT verification results
   
       lua_shared_dict cas_sessions 10m;
   
       # for authz-keycloak
       lua_shared_dict access-tokens 1m; # cache for service account access tokens
   
       lua_shared_dict ext-plugin 1m; # cache for ext-plugin
   
   
       # for custom shared dict
   
   
       lua_ssl_verify_depth 5;
       ssl_session_timeout 86400;
   
       underscores_in_headers on;
   
       lua_socket_log_errors off;
   
       resolver 100.100.2.136 100.100.2.138 ipv6=on;
       resolver_timeout 5;
   
       lua_http10_buffering off;
   
       lua_regex_match_limit 100000;
       lua_regex_cache_max_entries 8192;
   
       log_format main escape=json '$remote_addr - $remote_user [$time_iso8601] ($upstream_addr) "$host" "$request" $status $request_time - $upstream_response_time $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $request_body';
       uninitialized_variable_warn off;
   
       access_log logs/access.log main buffer=16384 flush=3;
       open_file_cache  max=1000 inactive=60;
       client_max_body_size 0;
       keepalive_timeout 60s;
       client_header_timeout 60s;
       client_body_timeout 60s;
       send_timeout 10s;
       variables_hash_max_size 2048;
   
       server_tokens off;
   
       include mime.types;
       charset utf-8;
   
       real_ip_header X-Real-IP;
   
       real_ip_recursive off;
   
       set_real_ip_from 127.0.0.1;
       set_real_ip_from unix:;
   
   
       # http configuration snippet starts
       
       # http configuration snippet ends
   
       upstream apisix_backend {
           server 0.0.0.1;
   
           keepalive 320;
           keepalive_requests 1000;
           keepalive_timeout 60s;
           # we put the static configuration above so that we can override it in the Lua code
   
           balancer_by_lua_block {
               apisix.http_balancer_phase()
           }
       }
   
   
       apisix_delay_client_max_body_check on;
       apisix_mirror_on_demand on;
   
   
       init_by_lua_block {
           require "resty.core"
           apisix = require("apisix")
   
           local dns_resolver = { "100.100.2.136", "100.100.2.138", }
           local args = {
               dns_resolver = dns_resolver,
           }
           apisix.http_init(args)
   
           -- set apisix_lua_home into constans module
           -- it may be used by plugins to determine the work path of apisix
           local constants = require("apisix.constants")
           constants.apisix_lua_home = "/usr/local/apisix"
       }
   
       init_worker_by_lua_block {
           apisix.http_init_worker()
       }
   
       exit_worker_by_lua_block {
           apisix.http_exit_worker()
       }
   
       server {
           listen 127.0.0.1:9090;
   
           access_log off;
   
           location / {
               content_by_lua_block {
                   apisix.http_control()
               }
           }
       }
   
       server {
            listen 192.168.7.5:9091 enable_process=privileged_agent;
   
           access_log off;
   
           location / {
               content_by_lua_block {
                   local prometheus = require("apisix.plugins.prometheus.exporter")
                   prometheus.export_metrics()
               }
           }
   
           location = /apisix/nginx_status {
               allow 127.0.0.0/24;
               deny all;
               stub_status;
           }
       }
   
       server {
           listen 0.0.0.0:9180;
           log_not_found off;
   
           # admin configuration snippet starts
           
           # admin configuration snippet ends
   
           set $upstream_scheme             'http';
           set $upstream_host               $http_host;
           set $upstream_uri                '';
   
           location /apisix/admin {
                allow all;
   
               content_by_lua_block {
                   apisix.http_admin()
               }
           }
       }
   
        upstream apisix_conf_backend {
            server 0.0.0.0:80;

            balancer_by_lua_block {
                local conf_server = require("apisix.conf_server")
                conf_server.balancer()
            }
        }
   
   
        server {
            listen unix:/usr/local/apisix/conf/config_listen.sock;

            access_log off;

            set $upstream_host '';

            access_by_lua_block {
                local conf_server = require("apisix.conf_server")
                conf_server.access()
            }

            location / {
                proxy_pass http://apisix_conf_backend;

                proxy_http_version 1.1;
                proxy_set_header Connection "";

                proxy_set_header Host $upstream_host;
                proxy_next_upstream error timeout non_idempotent
                    http_500 http_502 http_503 http_504;
            }

            log_by_lua_block {
                local conf_server = require("apisix.conf_server")
                conf_server.log()
            }
        }
   
   
   
       # for proxy cache
       proxy_cache_path /tmp/disk_cache_one levels=1:2 keys_zone=disk_cache_one:50m inactive=1d max_size=1G use_temp_path=off;
       lua_shared_dict memory_cache 50m;
   
       map $upstream_cache_zone $upstream_cache_zone_info {
           disk_cache_one /tmp/disk_cache_one,1:2;
       }
   
       server {
           listen 0.0.0.0:7080 default_server reuseport;
           listen [::]:7080 default_server reuseport;
           listen 0.0.0.0:7443 ssl default_server http2 reuseport;
           listen [::]:7443 ssl default_server http2 reuseport;
   
           server_name _;
   
           ssl_certificate      cert/ssl_PLACE_HOLDER.crt;
           ssl_certificate_key  cert/ssl_PLACE_HOLDER.key;
           ssl_session_cache    shared:SSL:20m;
           ssl_session_timeout 10m;
   
           ssl_protocols TLSv1.2 TLSv1.3;
           ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
           ssl_prefer_server_ciphers on;
           ssl_session_tickets off;
   
   
           # http server configuration snippet starts
           
           # http server configuration snippet ends
   
           location = /apisix/nginx_status {
               allow 127.0.0.0/24;
               deny all;
               access_log off;
               stub_status;
           }
   
           ssl_certificate_by_lua_block {
               apisix.http_ssl_phase()
           }
   
           proxy_ssl_name $upstream_host;
           proxy_ssl_server_name on;
   
           location / {
               set $upstream_mirror_uri         '';
               set $upstream_upgrade            '';
               set $upstream_connection         '';
   
               set $upstream_scheme             'http';
               set $upstream_host               $http_host;
               set $upstream_uri                '';
               set $ctx_ref                     '';
   
   
               # http server location configuration snippet starts
               
               # http server location configuration snippet ends
   
   
               access_by_lua_block {
                   apisix.http_access_phase()
               }
   
               proxy_http_version 1.1;
               proxy_set_header   Host              $upstream_host;
               proxy_set_header   Upgrade           $upstream_upgrade;
               proxy_set_header   Connection        $upstream_connection;
               proxy_set_header   X-Real-IP         $remote_addr;
               proxy_pass_header  Date;
   
                ### the following x-forwarded-* headers are sent to the upstream server
   
               set $var_x_forwarded_for        $remote_addr;
               set $var_x_forwarded_proto      $scheme;
               set $var_x_forwarded_host       $host;
               set $var_x_forwarded_port       $server_port;
   
               if ($http_x_forwarded_for != "") {
                   set $var_x_forwarded_for "${http_x_forwarded_for}, ${realip_remote_addr}";
               }
   
               proxy_set_header   X-Forwarded-For      $var_x_forwarded_for;
               proxy_set_header   X-Forwarded-Proto    $var_x_forwarded_proto;
               proxy_set_header   X-Forwarded-Host     $var_x_forwarded_host;
               proxy_set_header   X-Forwarded-Port     $var_x_forwarded_port;
   
                ### the following configuration caches response content from the upstream server
   
               set $upstream_cache_zone            off;
               set $upstream_cache_key             '';
               set $upstream_cache_bypass          '';
               set $upstream_no_cache              '';
   
               proxy_cache                         $upstream_cache_zone;
               proxy_cache_valid                   any 10s;
               proxy_cache_min_uses                1;
               proxy_cache_methods                 GET HEAD POST;
               proxy_cache_lock_timeout            5s;
               proxy_cache_use_stale               off;
               proxy_cache_key                     $upstream_cache_key;
               proxy_no_cache                      $upstream_no_cache;
               proxy_cache_bypass                  $upstream_cache_bypass;
   
   
               proxy_pass      $upstream_scheme://apisix_backend$upstream_uri;
   
               mirror          /proxy_mirror;
   
               header_filter_by_lua_block {
                   apisix.http_header_filter_phase()
               }
   
               body_filter_by_lua_block {
                   apisix.http_body_filter_phase()
               }
   
               log_by_lua_block {
                   apisix.http_log_phase()
               }
           }
   
           location @grpc_pass {
   
               access_by_lua_block {
                   apisix.grpc_access_phase()
               }
   
               # For servers which obey the standard, when `:authority` is missing,
               # `host` will be used instead. When used with apisix-base, we can do
               # better by setting `:authority` directly
               grpc_set_header   ":authority" $upstream_host;
               grpc_set_header   Content-Type application/grpc;
               grpc_socket_keepalive on;
               grpc_pass         $upstream_scheme://apisix_backend;
   
               header_filter_by_lua_block {
                   apisix.http_header_filter_phase()
               }
   
               body_filter_by_lua_block {
                   apisix.http_body_filter_phase()
               }
   
               log_by_lua_block {
                   apisix.http_log_phase()
               }
           }
   
   
           location = /proxy_mirror {
               internal;
   
   
   
               proxy_connect_timeout 60s;
               proxy_read_timeout 60s;
               proxy_send_timeout 60s;
               proxy_http_version 1.1;
               proxy_set_header Host $upstream_host;
               proxy_pass $upstream_mirror_uri;
           }
       }
   
       # http end configuration snippet starts
       
       # http end configuration snippet ends
   }
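    
    For readability, here are the connection-reuse directives scattered through the config above, collected in one place (values exactly as generated; note that the client-side `keepalive_requests` is not set, so the nginx default of 1000 applies):
    
        # client side (http {} context)
        keepalive_timeout 60s;            # idle client connections closed after 60s
                                          # keepalive_requests unset -> default 1000
    
        # upstream side (excerpt; balancer_by_lua_block omitted)
        upstream apisix_backend {
            keepalive 320;                # idle upstream connections cached per worker
            keepalive_requests 1000;      # upstream connection closed after 1000 requests
            keepalive_timeout 60s;        # idle upstream connections closed after 60s
        }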


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "shreemaan-abhishek (via GitHub)" <gi...@apache.org>.
shreemaan-abhishek commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1870796464

   does increasing the keepalive timeout help?
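    
    For reference, the generated nginx.conf is read-only, so these values are normally changed through conf/config.yaml: `nginx_config.http.keepalive_timeout` maps straight to the directive, and extra http-level directives can be injected via `nginx_config.http_configuration_snippet`, which lands between the "# http configuration snippet starts / ends" markers visible in the config above. A sketch of snippet content one might try; the value is illustrative, and only `keepalive_requests` is injected because `keepalive_timeout` already exists at http level and a second copy would be rejected as a duplicate directive:
    
        # snippet content rendered at the "# http configuration snippet starts"
        # marker (http {} context); illustrative value
        keepalive_requests 10000;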


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-2022363436

   This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-2016433222

    Due to the lack of a response from the reporter, this issue has been labeled with "no response". It will be closed in 3 days if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@apisix.apache.org list. Thank you for your contributions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [apisix] alptugay commented on issue #9398: Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase.

Posted by "alptugay (via GitHub)" <gi...@apache.org>.
alptugay commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1595351310

   We are having the same issue


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "shreemaan-abhishek (via GitHub)" <gi...@apache.org>.
shreemaan-abhishek commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1869982774

   > Because they are using the same poolname
   
   how can you confirm this? @alptugay 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "bingxueai123456 (via GitHub)" <gi...@apache.org>.
bingxueai123456 commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1869995677

    > We are having the same issue. @bingxueai123456 are you using healthcheck? According to my observation healthcheck implementation breaks keepalive. Because they are using the same poolname
    
   I have configured health checks on the load balancer using TCP packets.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "theweakgod (via GitHub)" <gi...@apache.org>.
theweakgod commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1871689272

    @bingxueai123456 Have you set `proxy_http_version 1.1;` and `proxy_set_header Connection "";` in your load balancer?
    This sounds a bit like the classic "dumb proxy" problem.
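    
    If the load balancer happens to be nginx, a minimal sketch of the setup being suggested; the upstream name and address are hypothetical:
    
        upstream apisix_nodes {
            server 192.0.2.10:7080;              # hypothetical APISIX node
            keepalive 32;                        # cache idle connections to APISIX
        }
    
        server {
            listen 80;
    
            location / {
                proxy_http_version 1.1;          # connection reuse requires HTTP/1.1
                proxy_set_header Connection "";  # don't forward "Connection: close"
                proxy_pass http://apisix_nodes;
            }
        }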


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Re: [I] Apisix always actively closes TCP connections, leading to too many TIME_WAIT states. The throughput has been unable to increase. [apisix]

Posted by "shreemaan-abhishek (via GitHub)" <gi...@apache.org>.
shreemaan-abhishek commented on issue #9398:
URL: https://github.com/apache/apisix/issues/9398#issuecomment-1869982476

   could you please share your nginx.conf? I am specifically interested in finding out if you modified the `keepalive_timeout`. Also, does extending the timeout duration fix the issue?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org