Posted to notifications@apisix.apache.org by GitBox <gi...@apache.org> on 2022/10/26 10:33:59 UTC

[GitHub] [apisix] kingluo opened a new issue, #8181: bug: enable ext-plugin-post-resp plugin may lost last downstream response chunk

kingluo opened a new issue, #8181:
URL: https://github.com/apache/apisix/issues/8181

   ### Current Behavior
   
   enable the ext-plugin-post-resp plugin; when the conditions below are satisfied, the downstream response loses its last chunk:
   * ext-plugin-post-resp does not change the upstream response body, so the upstream response is forwarded to downstream as-is
   * the upstream responds with multiple chunks, where the first chunk plus headers is bigger than `postpone_output` (1460 by default) and the second (or last) chunk is smaller than `postpone_output`
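   
   For context, `postpone_output` is a standard nginx directive, so the threshold can also be set explicitly (1460 is the built-in default):
   
   ```nginx
   # http, server, or location context: nginx postpones sending response
   # data until at least this many bytes have accumulated (or a flush occurs).
   postpone_output 1460;
   ```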
   
   Issue report source (slack link):
   
   https://the-asf.slack.com/archives/CUC5MN17A/p1666604718850519
   
   
   ## Reason
   
   Function that triggers the issue:
   
   https://github.com/apache/apisix/blob/e2d0f0b16bb157412684a38b129d12acb4628d39/apisix/plugins/ext-plugin-post-resp.lua#L109
   
   Assume we get two chunks from upstream.
   The first chunk plus headers is bigger than `postpone_output`, so `ngx.print` sends the data via a socket system call.
   If that succeeds, it places two free bufs in `ctx->free_bufs`; in fact, these two bufs point to the same buf. See below for the low-level detail.
   
   Then `ngx.flush` consumes one free buf and tries to flush, although nothing needs to be flushed at that moment.
   After `ngx.flush` finishes, `ctx->free_bufs` has three free bufs, all pointing to the same buf.
   
   The second chunk consumes one free buf, and because it is smaller than `postpone_output`, it is kept by nginx and not returned to `ctx->free_bufs`.
   
   Then the problem comes in: `ngx.flush` consumes one free buf, but this buf points to the one still used by the last chunk, so that buf gets cleared (with `buf->flush=1` set), which results in empty content wrapped in a chunk prefix and suffix:
   
   ```
   d\r\n\r\n
   ```
   Here `d\r\n` is the chunk prefix, meaning 13 bytes, and the trailing `\r\n` is the chunk suffix. But there is nothing in between!
   
   As a result, `curl` reports `transfer closed with outstanding read data remaining`, because the malformed chunk confuses it.
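   
   The framing above can be reproduced in a few lines. A minimal sketch in Go (in the same language as the reproduction server below; `encodeChunk` is an illustrative helper, not APISIX code):
   
   ```go
   package main
   
   import "fmt"
   
   // encodeChunk frames data as one HTTP/1.1 chunked-transfer chunk:
   // the payload length in hex, CRLF, the payload, CRLF.
   func encodeChunk(data []byte) string {
       return fmt.Sprintf("%x\r\n%s\r\n", len(data), data)
   }
   
   func main() {
       // A 13-byte payload gets the prefix "d\r\n" (0xd == 13).
       fmt.Printf("%q\n", encodeChunk([]byte("thirteen byte")))
       // The buggy output keeps the 13-byte framing but drops the payload:
       fmt.Printf("%q\n", "d\r\n\r\n")
   }
   ```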
   
   ### Low-level explanation
   
   `ngx_http_lua_ngx_echo` and `ngx_http_lua_ngx_flush` each end up calling `ngx_chain_update_chains`, so it runs twice and returns the same buf to `ctx->free_bufs` twice!
   
   Both `ngx_http_lua_ngx_echo` and `ngx_http_lua_ngx_flush` call `ngx_http_lua_output_filter`:
   
   ```c
   static ngx_int_t
   ngx_http_lua_output_filter(ngx_http_request_t *r, ngx_chain_t *in)
   {
   ...
       rc = ngx_http_output_filter(r, in);
   
       if (rc == NGX_ERROR) {
           return NGX_ERROR;
       }
   
       ctx = ngx_http_get_module_ctx(r, ngx_http_lua_module);
   
       if (ctx == NULL) {
           return rc;
       }
   
       ngx_chain_update_chains(r->pool,
                               &ctx->free_bufs, &ctx->busy_bufs, &in,
                               (ngx_buf_tag_t) &ngx_http_lua_module);
   ...
       return rc;
   }
   ```
   
   ```c
   static ngx_int_t
   ngx_http_lua_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
   {
   ...
       ngx_chain_update_chains(r->pool,
                               &ctx->free_bufs, &ctx->filter_busy_bufs, &out,
                               (ngx_buf_tag_t) &ngx_http_lua_module);
   ...
       return rc;
   }
   
   ```
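   
   The double return can be modeled with a toy free list. This is a simplified sketch with invented names (`buf`, `link`, `recycle`), not nginx's actual allocator; it only shows how recycling two chain links that share one underlying buffer leaves that buffer on the free list twice:
   
   ```go
   package main
   
   import "fmt"
   
   // buf stands in for ngx_buf_t; link for ngx_chain_t.
   type buf struct{ data string }
   type link struct {
       b    *buf
       next *link
   }
   
   // recycle models ngx_chain_update_chains: it pushes every consumed
   // link onto the free list without checking whether the underlying
   // buf is already reachable from it.
   func recycle(free *link, consumed *link) *link {
       for consumed != nil {
           next := consumed.next
           consumed.next = free
           free = consumed
           consumed = next
       }
       return free
   }
   
   func main() {
       shared := &buf{data: "chunk"}
       // Two chain links pointing at the same buf, as happens when both
       // the output filter and the body filter recycle the same buffer.
       free := recycle(nil, &link{b: shared})
       free = recycle(free, &link{b: shared})
   
       // The free list now hands out the same buf twice: a later writer
       // can clear it while a pending chunk still references it.
       count := 0
       for v := free; v != nil; v = v.next {
           if v.b == shared {
               count++
           }
       }
       fmt.Println("same buf appears", count, "times")
   }
   ```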
   
   ### Use gdb to verify it
   
   `ctx_freebufs.gdb`
   
   ```gdb
   b ngx_http_lua_output.c:241
   command 1
       set $v = ctx->free_bufs
       while ($v != 0)
           p $v
           p $v->buf
           set $v=$v->next
       end
       continue
   end
   
   
   b ngx_http_lua_output.c:554
   command 2
       set $v = ctx->free_bufs
       while ($v != 0)
           p $v
           p $v->buf
           set $v=$v->next
       end
       continue
   end
   
   continue
   ```
   
   Output:
   
   ```
   Thread 1 "openresty" hit Breakpoint 1, ngx_http_lua_ngx_echo (L=0x7f7b4139a2a0, newline=<optimized out>) at ../ngx_lua-0.10.21/src/ngx_http_lua_output.c:241
   241         if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
   $1 = (ngx_chain_t *) 0x55bedbbd6350
   $2 = (ngx_buf_t *) 0x55bedbbd6360
   $3 = (ngx_chain_t *) 0x55bedbbd69e8
   $4 = (ngx_buf_t *) 0x55bedbbd6360
   
   Thread 1 "openresty" hit Breakpoint 2, ngx_http_lua_ngx_flush (L=0x7f7b4139a2a0) at ../ngx_lua-0.10.21/src/ngx_http_lua_output.c:556
   556         if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
   $5 = (ngx_chain_t *) 0x55bedbbd6350
   $6 = (ngx_buf_t *) 0x55bedbbd6360
   $7 = (ngx_chain_t *) 0x55bedbbd69f8
   $8 = (ngx_buf_t *) 0x55bedbbd6360
   $9 = (ngx_chain_t *) 0x55bedbbd69e8
   $10 = (ngx_buf_t *) 0x55bedbbd6360
   
   Thread 1 "openresty" hit Breakpoint 1, ngx_http_lua_ngx_echo (L=0x7f7b4139a2a0, newline=<optimized out>) at ../ngx_lua-0.10.21/src/ngx_http_lua_output.c:241
   241         if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
   $11 = (ngx_chain_t *) 0x55bedbbd69f8
   $12 = (ngx_buf_t *) 0x55bedbbd6360
   $13 = (ngx_chain_t *) 0x55bedbbd69e8
   $14 = (ngx_buf_t *) 0x55bedbbd6360
   
   Thread 1 "openresty" hit Breakpoint 2, ngx_http_lua_ngx_flush (L=0x7f7b4139a2a0) at ../ngx_lua-0.10.21/src/ngx_http_lua_output.c:556
   556         if (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE) {
   $15 = (ngx_chain_t *) 0x55bedbbd69f8
   $16 = (ngx_buf_t *) 0x55bedbbd6360
   $17 = (ngx_chain_t *) 0x55bedbbd6350
   $18 = (ngx_buf_t *) 0x55bedbbd6360
   $19 = (ngx_chain_t *) 0x55bedbbe1a18
   $20 = (ngx_buf_t *) 0x55bedbbd6360
   $21 = (ngx_chain_t *) 0x55bedbbe19e0
   $22 = (ngx_buf_t *) 0x55bedbbd6360
   $23 = (ngx_chain_t *) 0x55bedbbd69e8
   $24 = (ngx_buf_t *) 0x55bedbbd6360
   ```
   
   **Note that all free bufs point to the same address `0x55bedbbd6360`!!!**
   
   ## Workaround before the bugfix
   
   Comment out `ngx.flush`. This does not affect the final result, because when the main request finishes, all chunks are sent to downstream anyway. It does not even affect performance: as long as a chunk is bigger than `postpone_output`, it is sent directly via the socket, whether or not you flush afterwards.
   
   ```lua
   local function send_chunk(chunk)
       if not chunk then
           return nil
       end
   
       local ok, print_err = ngx_print(chunk)
       if not ok then
           return "output response failed: ".. (print_err or "")
       end
       --local ok, flush_err = ngx_flush(true)
       --if not ok then
       --    core.log.warn("flush response failed: ", flush_err)
       --end
   
       return nil
   end
   ```
   
   ### Expected Behavior
   
   _No response_
   
   ### Error Logs
   
   _No response_
   
   ### Steps to Reproduce
   
   `go_chunked_http_server.go`
   
   ```go
   package main
   
   import (
       "io"
       "net"
       "net/http"
   )
   
   func handle(w http.ResponseWriter, req *http.Request) {
       w.Header().Set("Transfer-Encoding", "chunked")
       buf := make([]byte, 10000)
       io.WriteString(w, string(buf[:1450]))
       if f, ok := w.(http.Flusher); ok {
           f.Flush()
       }
       io.WriteString(w, string(buf[:13]))
   }
   
   func main() {
       addr := "127.0.0.1:9094"
       l, err := net.Listen("tcp", addr)
       if err != nil {
           panic("AAAAH")
       }
   
       server := http.Server{
           Handler: http.HandlerFunc(handle),
       }
       server.Serve(l)
   }
   
   ```
   
   ```bash
   go run go_chunked_http_server.go
   
   curl -i http://127.0.0.1:9180/apisix/admin/routes/chunked_bug  \
   -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
   {
       "uri": "/stream-bytes/*",
       "plugins": {
           "ext-plugin-post-resp": {
               "conf" : [
                   {"name": "ext-plugin-A", "value": "{\"enable\":\"feature\"}"}
               ]
           }
       },
       "upstream": {
           "type": "roundrobin",
           "nodes": {
               "127.0.0.1:9094": 1
           }
       }
   }'
   
   curl -s -vvv localhost:9080/stream-bytes/8205 -o /tmp/chunked.response
   * TCP_NODELAY set
   * Connected to localhost (::1) port 9080 (#0)
   > GET /stream-bytes/8205 HTTP/1.1
   > Host: localhost:9080
   > User-Agent: curl/7.68.0
   > Accept: */*
   >
   * Mark bundle as not supporting multiuse
   < HTTP/1.1 200 OK
   < Date: Wed, 26 Oct 2022 09:25:35 GMT
   < Content-Type: text/plain; charset=utf-8
   < Transfer-Encoding: chunked
   < Connection: keep-alive
   < Server: APISIX/2.99.0
   <
   { [1467 bytes data]
   100  1457    0  1457    0     0     24      0 --:--:--  0:01:00 --:--:--     0* transfer closed with outstanding read data remaining
   100  1457    0  1457    0     0     24      0 --:--:--  0:01:00 --:--:--     0
   * Closing connection 0
   curl: (18) transfer closed with outstanding read data remaining
   
   ```
   
   ### Environment
   
   - APISIX version (run `apisix version`): master branch
   - Operating system (run `uname -a`):
   - OpenResty / Nginx version (run `openresty -V` or `nginx -V`):
   - etcd version, if relevant (run `curl http://127.0.0.1:9090/v1/server_info`):
   - APISIX Dashboard version, if relevant:
   - Plugin runner version, for issues related to plugin runners:
   - LuaRocks version, for installation issues (run `luarocks --version`):
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscribe@apisix.apache.org.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [apisix] kingluo closed issue #8181: bug: enable ext-plugin-post-resp plugin may lost last downstream response chunk

Posted by GitBox <gi...@apache.org>.
kingluo closed issue #8181: bug: enable ext-plugin-post-resp plugin may lost last downstream response chunk
URL: https://github.com/apache/apisix/issues/8181

