Posted to notifications@apisix.apache.org by "zouchengzhuo (via GitHub)" <gi...@apache.org> on 2023/03/31 10:00:41 UTC

[GitHub] [apisix] zouchengzhuo opened a new issue, #9214: bug: In a stream plugin, when clients connect concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability of getting tcp objects with the same memory address

zouchengzhuo opened a new issue, #9214:
URL: https://github.com/apache/apisix/issues/9214

   ### Current Behavior
   
   <img width="887" alt="image" src="https://user-images.githubusercontent.com/6971870/229087450-57369d63-d8d7-408b-9e56-edf3a9cb71cf.png">
   
   When a client establishes a connection, the plugin takes the connection's tcp object and at the same time spawns 3 child coroutines, each connecting to one of 3 backends.
   
   <img width="846" alt="image" src="https://user-images.githubusercontent.com/6971870/229087918-998d9470-8516-4322-a753-8959cd7bba4e.png">
   
   After the connections to the backends are established, the plugin tries to receive data in a while(true) loop.
   
   According to the logs, there is a certain probability of getting many tcp objects with the same address; both the objects returned by ngx.req.socket and those returned by ngx.socket.tcp can be duplicated.
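   
   For illustration only (not code from the report), a minimal sketch of the kind of logging that exposes this symptom, assuming it runs in a stream plugin's preread phase:
   
   ```lua
   -- Log the address of the downstream socket and of each upstream cosocket.
   -- tostring() on a cosocket object prints its table address; seeing the same
   -- address repeated across concurrent coroutines/connections is the symptom
   -- described above.
   local downstream = ngx.req.socket(true)
   ngx.log(ngx.ERR, "downstream socket: ", tostring(downstream))
   
   for i = 1, 3 do
       ngx.thread.spawn(function()
           local sock = ngx.socket.tcp()
           ngx.log(ngx.ERR, "upstream socket #", i, ": ", tostring(sock))
       end)
   end
   ```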
   
   <img width="1275" alt="image" src="https://user-images.githubusercontent.com/6971870/229088641-831832ba-4e88-4305-a042-3c8cb1e595bd.png">
   
   When this happens, calling receive on ngx.req.socket in the plugin's while(true) loop blocks forever: it neither reads data successfully nor times out, even though a lot of data has already piled up in the read buffer.
   
   <img width="758" alt="image" src="https://user-images.githubusercontent.com/6971870/229089016-65c616d0-0f09-4495-b18d-6fad0b5bfcac.png">
   
   
   
   
   
   ### Expected Behavior
   
   ngx.req.socket should no longer block: it should either read the data successfully or time out.
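   
   In other words (a sketch of the expected contract, not code from the report), every read on the raw downstream socket should resolve within the configured timeout:
   
   ```lua
   local sock = ngx.req.socket(true)
   sock:settimeout(30000)  -- 30s read timeout
   
   local data, err, partial = sock:receive(1024)
   if data then
       -- expected: the requested bytes are returned once the buffer has data
   elseif err == "timeout" then
       -- also acceptable: not enough data arrived within 30s; `partial`
       -- holds whatever was read so far
   else
       -- "closed" or another hard error
   end
   ```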
   
   ### Error Logs
   
   _No response_
   
   ### Steps to Reproduce
   
   This can be reproduced with a stream plugin. Plugin code:
   ```lua
   local plugin_name = "czzou-tcp-stream"
   local ngx_log = ngx.log
   local ngx_DEBUG = ngx.DEBUG
   local ngx_ERROR = ngx.ERR
   
   local schema = {
       type = "object"
   }
   
   local _M = {
       version = 0.1,
       priority = 1005,
       name = plugin_name,
       schema = schema,
   }
   
   function _M.check_schema(schema_type, schema)
       -- perform validation
       return true
   end
   
   function _M.upstream_handler(remote_port, server_ip, server_port)
       local upstream_sock = ngx.socket.tcp()
       local ok, err = upstream_sock:connect(server_ip,server_port)
       if not ok then
       ngx_log(ngx_ERROR, remote_port .. string.format(" failed to connect to upstream: %s", err))
           return
       else
           ngx_log(ngx_ERROR, string.format("connect upstream success ip %s port %d ", server_ip, server_port))
       end
       ngx_log(ngx_ERROR, remote_port ..  string.format(" start upstream_handler: %s ", tostring(upstream_sock)))
       while true do
        -- each iteration, try to read 3k~10k bytes, then send them on to the socket
           upstream_sock:settimeout(30000)
           local package_len = 3000 + math.floor(7000 * math.random())
           -- local package_len = 1024
           ngx_log(ngx_ERROR, remote_port ..  " start to receive data from upstream \n")
           local data, err, partial = upstream_sock:receive(package_len)
           if err then
               ngx_log(ngx_ERROR, remote_port .. string.format(" failed to receive data from upstream: %s", err))
               if string.find(err, "timeout") then
                   goto continue
               end
               return
           end
           ngx_log(ngx_ERROR, remote_port ..  " received " .. #data .. " bytes of data from upstream \n")
           :: continue ::
       end
   end
   
   function _M.preread(conf, ctx)
   
       local socket = ngx.req.socket(true)
       local upstream_co_list = {}
       local upstream_sock_list = {}
       local upstream_num = 3
       local remote_port = ctx.var["remote_port"]
       ngx_log(ngx_ERROR, remote_port ..  string.format(" start upstream_handler (main): %s ", tostring(socket)))
        -- connect to upstream_num backends; when data arrives, send it to a random backend, then write the backends' data back to the socket
       local server_ip = "127.0.0.1"
       local server_port = 10000
       for i = 1, upstream_num do
           local co = ngx.thread.spawn(_M.upstream_handler, remote_port, server_ip, server_port)
            -- local services listen on ports 10000~1000n
           server_port = server_port + 1
           upstream_co_list[i] = co
       end
       
   
       while true do
           socket:settimeout(30000)
           local package_len = 3000 + math.floor(7000 * math.random())
           -- local package_len = 1024
           ngx_log(ngx_ERROR, remote_port ..  string.format(" try to receive %d bytes from client", package_len))
           local data, err, partial = socket:receive(package_len)
           if err then
               ngx_log(ngx_ERROR, remote_port .. string.format(" failed to receive data from client: %s", err))
               if string.find(err, "timeout") then
                   goto continue
               end
                -- kill the backend coroutines and exit
               for i= 1, upstream_num do
                   ngx.thread.kill(upstream_co_list[i])
               end
                -- return 1 if the connection was closed, otherwise 503
               if err == "closed" then
                   return 1
               else
                   return 503
               end
           end
           ngx_log(ngx_ERROR, remote_port .. " received " .. #data .. " bytes of data from client\n")
           :: continue ::
       end
   
       socket:close()
   end
   
   return _M
   ```
   
   ### Environment
   
   - APISIX version (run `apisix version`): 3.2.0
   - Operating system (run `uname -a`): centos7
   - OpenResty / Nginx version (run `openresty -V` or `nginx -V`): openresty/1.21.4.1
   - etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`): 3.4.0
   - APISIX Dashboard version, if relevant:
   - Plugin runner version, for issues related to plugin runners:
   - LuaRocks version, for installation issues (run `luarocks --version`):
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [apisix] zouchengzhuo commented on issue #9214: bug: In a streaming plugin, when the client connects concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability that the tcp read buffer accumulates data but the receive method of ngx.req.socket blocks continuously.

Posted by "zouchengzhuo (via GitHub)" <gi...@apache.org>.
zouchengzhuo commented on issue #9214:
URL: https://github.com/apache/apisix/issues/9214#issuecomment-1527381835

   > Sorry, I did not find this usage in the APISIX project.
   > 
   > ```
   > ~/w/apisix *master> ack 'socket.tcp' apisix/stream/plugins/
   > ~/w/apisix *master> ack 'ngx.req.socket' apisix/stream/
   > apisix/stream/plugins/mqtt-proxy.lua
   > 131:    local sock = ngx.req.socket()
   > ```
   > 
   > and
   > 
   > > When the raw argument is true, it is required that no pending data from any previous [ngx.say](https://github.com/openresty/lua-nginx-module#ngxsay), [ngx.print](https://github.com/openresty/lua-nginx-module#ngxprint), or [ngx.send_headers](https://github.com/openresty/lua-nginx-module#ngxsend_headers) calls exists. So if you have these downstream output calls previously, you should call [ngx.flush(true)](https://github.com/openresty/lua-nginx-module#ngxflush) before calling ngx.req.socket(true) to ensure that there is no pending output data. If the request body has not been read yet, then this "raw socket" can also be used to read the request body.
   
   The same problem occurs when using ngx.req.socket(): the tcp receive queue is blocked when using sock:receive.
   
   When using sock:receiveany, the tcp receive queue is not blocked, but in that case I have to manage the buffer myself.
   
   When using sock:receive, there seems to be a problem with the coroutine scheduling that causes the main coroutine to fail to resume.
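   
   For reference, a rough sketch (not from the thread) of the receiveany-based workaround described above, reading whatever bytes are available and keeping a Lua-side buffer; the 64 KB cap and the framing comment are illustrative assumptions:
   
   ```lua
   local sock = ngx.req.socket(true)
   sock:settimeout(30000)
   
   local buf = {}
   while true do
       -- receiveany() returns as soon as any data is available (up to the cap),
       -- instead of waiting for an exact byte count like receive(n) does
       local data, err = sock:receiveany(64 * 1024)
       if not data then
           if err ~= "timeout" then
               ngx.log(ngx.ERR, "receiveany failed: ", err)
               return
           end
           -- timeout: nothing arrived within 30s, keep polling
       else
           buf[#buf + 1] = data
           -- the plugin has to do its own framing here, e.g. concatenate the
           -- chunks (table.concat(buf)) and slice fixed-size packages out of it
       end
   end
   ```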
   




Re: [I] bug: In a streaming plugin, when the client connects concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability that the tcp read buffer accumulates data but the receive method of ngx.req.socket blocks continuously. [apisix]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] closed issue #9214: bug: In a streaming plugin, when the client connects concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability that the tcp read buffer accumulates data but the receive method of ngx.req.socket blocks continuously.
URL: https://github.com/apache/apisix/issues/9214




Re: [I] bug: In a streaming plugin, when the client connects concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability that the tcp read buffer accumulates data but the receive method of ngx.req.socket blocks continuously. [apisix]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on issue #9214:
URL: https://github.com/apache/apisix/issues/9214#issuecomment-2080438861

   This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.




[GitHub] [apisix] Sn0rt commented on issue #9214: bug: In a streaming plugin, when the client connects concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability that the tcp read buffer accumulates data but the receive method of ngx.req.socket blocks continuously.

Posted by "Sn0rt (via GitHub)" <gi...@apache.org>.
Sn0rt commented on issue #9214:
URL: https://github.com/apache/apisix/issues/9214#issuecomment-1517361432

   Sorry, I did not find this usage in the APISIX project.
   
   ```
   ~/w/apisix *master> ack 'socket.tcp' apisix/stream/plugins/
   ~/w/apisix *master> ack 'ngx.req.socket' apisix/stream/
   apisix/stream/plugins/mqtt-proxy.lua
   131:    local sock = ngx.req.socket()
   ```




Re: [I] bug: In a streaming plugin, when the client connects concurrently, or when ngx.socket.tcp objects are created concurrently in the plugin, there is a certain probability that the tcp read buffer accumulates data but the receive method of ngx.req.socket blocks continuously. [apisix]

Posted by "github-actions[bot] (via GitHub)" <gi...@apache.org>.
github-actions[bot] commented on issue #9214:
URL: https://github.com/apache/apisix/issues/9214#issuecomment-2053597909

   This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@apisix.apache.org list. Thank you for your contributions.

