Posted to issues@trafficserver.apache.org by "Bryan Call (JIRA)" <ji...@apache.org> on 2012/10/31 00:26:15 UTC

[jira] [Comment Edited] (TS-1538) TS hangs (dead lock) on HTTPS POST/PROPFIND requests

    [ https://issues.apache.org/jira/browse/TS-1538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487365#comment-13487365 ] 

Bryan Call edited comment on TS-1538 at 10/30/12 11:25 PM:
-----------------------------------------------------------

Seeing problems where the same mutex is being used in SSLNetVConnection::net_read_io() for the MUTEX_TRY_LOCK_FOR() call across different VIOs on different threads.
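
For illustration, a minimal stand-alone sketch of that contention pattern, with std::mutex and a plain try_lock() standing in for ProxyMutex and MUTEX_TRY_LOCK_FOR() (names and structure are simplified, not the real iocore code):

// Two net threads service two different VIOs, but both VIOs ended up
// pointing at the same mutex, so one of the reads gets deferred.
#include <cstdio>
#include <mutex>
#include <thread>

std::mutex shared_vio_mutex;            // one mutex shared across both VIOs

void net_read_io(int thread_id)
{
    // MUTEX_TRY_LOCK_FOR() is non-blocking: if the lock is held elsewhere,
    // the read is not performed on this pass and has to be retried later.
    if (!shared_vio_mutex.try_lock()) {
        std::printf("thread %d: try-lock failed, read deferred\n", thread_id);
        return;
    }
    std::printf("thread %d: lock acquired, reading\n", thread_id);
    shared_vio_mutex.unlock();
}

int main()
{
    std::thread a(net_read_io, 1);      // e.g. the client-side SSL read
    std::thread b(net_read_io, 2);      // a different VIO on another thread
    a.join();
    b.join();
}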

Digging into this, I am seeing that UnixNetVConnection::do_io_read() is setting the continuation and the mutex for the read VIO.  This is being called from SSLNextProtocolAccept::mainEvent() in a roundabout way, which sets its own mutex and continuation on the read VIO.  Since there is only one SSLNextProtocolAccept per SSL port, there will be contention issues.  Stack trace follows:

(gdb) bt
#0  UnixNetVConnection::do_io_read (this=0x7fffd4016df0, c=0x12a42b0, nbytes=0, buf=0x12ac7b0) at UnixNetVConnection.cc:525
#1  0x000000000066f109 in do_io (cb=<optimized out>, nbytes=0, c=0x12a42b0, this=0x7fffd4016df0, op=<optimized out>, data=<optimized out>)
    at ../../iocore/eventsystem/P_VConnection.h:102
#2  do_io (cb=<optimized out>, c=0x12a42b0, this=0x7fffd4016df0, op=<optimized out>, nbytes=<optimized out>, data=<optimized out>) at SSLNextProtocolAccept.cc:60
#3  SSLNextProtocolAccept::mainEvent (this=0x12a42b0, event=202, edata=<optimized out>) at SSLNextProtocolAccept.cc:77
#4  0x000000000067ebe2 in handleEvent (data=0x7fffd4016df0, event=202, this=<optimized out>) at ../../iocore/eventsystem/I_Continuation.h:146
#5  UnixNetVConnection::acceptEvent (this=0x7fffd4016df0, event=1, e=<optimized out>) at UnixNetVConnection.cc:971
#6  0x00000000006a1412 in handleEvent (data=0x103af00, event=1, this=<optimized out>) at I_Continuation.h:146
#7  EThread::process_event (this=0x7fffed57f010, e=0x103af00, calling_code=1) at UnixEThread.cc:142
#8  0x00000000006a2213 in EThread::execute (this=0x7fffed57f010) at UnixEThread.cc:193
#9  0x00000000006a01a2 in spawn_thread_internal (a=0x115f3a0) at Thread.cc:88
#10 0x00000038ac407d14 in start_thread (arg=0x7fffecd76700) at pthread_create.c:309
#11 0x00000038abcf167d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115


In net_accept (for the non-SSL case) a new ProxyMutex is created for the VIO.  I haven't tested whether that is the correct fix or not.
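
For reference, a rough model of that direction in plain C++ (std::mutex and toy structs stand in for ProxyMutex and the iocore types; this is untested and only illustrates the idea, not an actual patch):

// Current behaviour: the read VIO inherits the acceptor's mutex, so every
// SSL connection accepted on the port serialises on the same lock.
// Possible direction: allocate a fresh mutex per VIO, as net_accept() does
// on the non-SSL path, so different connections stop contending.
#include <cstdio>
#include <memory>
#include <mutex>

struct Continuation {
    std::shared_ptr<std::mutex> mutex = std::make_shared<std::mutex>();
};

struct VIO {
    std::shared_ptr<std::mutex> mutex;   // the lock net_read_io() tries to take
};

VIO do_io_read_shared(Continuation &acceptor)      // today's behaviour
{
    VIO vio;
    vio.mutex = acceptor.mutex;                    // inherit the acceptor's mutex
    return vio;
}

VIO do_io_read_per_connection(Continuation &)      // net_accept-style alternative
{
    VIO vio;
    vio.mutex = std::make_shared<std::mutex>();    // fresh mutex per VIO
    return vio;
}

int main()
{
    Continuation ssl_accept;                       // one acceptor per SSL port
    VIO a = do_io_read_shared(ssl_accept);
    VIO b = do_io_read_shared(ssl_accept);
    std::printf("shared path:         same mutex across VIOs? %s\n",
                a.mutex == b.mutex ? "yes" : "no");

    VIO c = do_io_read_per_connection(ssl_accept);
    VIO d = do_io_read_per_connection(ssl_accept);
    std::printf("per-connection path: same mutex across VIOs? %s\n",
                c.mutex == d.mutex ? "yes" : "no");
}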
                
> TS hangs (dead lock) on HTTPS POST/PROPFIND requests
> ----------------------------------------------------
>
>                 Key: TS-1538
>                 URL: https://issues.apache.org/jira/browse/TS-1538
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core, HTTP, SSL
>    Affects Versions: 3.2.0
>         Environment: RedHat Enterprise Linux 6.2, Intel 64-bit
>            Reporter: Abhishek Nayani
>            Assignee: Brian Geffon
>            Priority: Blocker
>             Fix For: 3.2.0
>
>
> A very reproducible bug where the body of an HTTPS POST request is never forwarded to the origin server.
> The client submits an HTTPS POST request to TS, which is supposed to forward it to the backend/origin server via HTTP.  TS processes the HTTP headers and establishes a connection to the origin server, but the body of the HTTPS POST is never read.  This hangs until the client times out and shuts down the connection.
> To reproduce:
> # Client connects to TS using HTTPS (works OK if it is just HTTP).
> # It must be a POST request.
> # TS must use at least 2 worker threads.
> # Easier to reproduce when the connection to the origin server is HTTP (not HTTPS).
> # POST body must be large enough so that the HTTP request headers and POST body do *NOT* fit within the same TCP packet. (2000 bytes is a good size)
> # I can consistently reproduce this problem using 2 separate clients, each simultaneously submitting 2 requests back to back (i.e., 2 requests from each client, a total of 4 requests).  This gives a high probability that at least one of the requests will hang.
> Observation:
> # Thread A accepted and processed the HTTP headers, and called "UnixNetProcessor::connect_re" to prepare a new connection to the origin server.
> # Thread A must not have read the body of the POST yet.  Otherwise, it works fine.
> # Thread B was assigned the task of handling the origin server connection.  If the same thread A was picked, then everything works fine.
> # Apparently, one of the first things that thread B does is to acquire the mutex for reading from the client.  (Why does it do that??)
> # While thread B was holding the mutex, thread A proceeded in "SSLNetVConnection::net_read_io" and tried and failed to acquire the mutex.  Thread A typically re-tried calling "SSLNetVConnection::net_read_io" soon afterwards, but gave up after the second failure.  If thread B released the mutex soon enough, then thread A could proceed happily and everything works.
> # From this point on, the body of the POST is never read from the client, there is nothing to be proxied to the origin server, and neither the consumer nor the producer task is ever scheduled to run again until the client times out.  I tried setting the client-side timeout to as long as 3-5 minutes, and TS really does not recover by itself until the client closes the connection.  (A simplified sketch of this failure mode follows this list.)
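>
> A simplified, self-contained sketch of that failure mode, with std::mutex and a plain thread standing in for the shared ProxyMutex and the event threads (the names and the retry budget are illustrative, not the actual Traffic Server code):
>
> // Thread B (modelled by main) holds the client-read mutex while it sets up
> // the origin connection; thread A try-locks, retries once, then gives up
> // for good, so the POST body is never read even after the lock is released.
> #include <chrono>
> #include <cstdio>
> #include <mutex>
> #include <thread>
>
> std::mutex client_read_mutex;
>
> void thread_a_read_client()          // analogous to the client-side read attempt
> {
>     for (int attempt = 1; attempt <= 2; ++attempt) {
>         if (client_read_mutex.try_lock()) {
>             std::printf("thread A: read the POST body\n");
>             client_read_mutex.unlock();
>             return;
>         }
>         std::printf("thread A: try-lock attempt %d failed\n", attempt);
>         std::this_thread::sleep_for(std::chrono::milliseconds(10));
>     }
>     // Once the retry budget is exhausted nothing ever reschedules the read,
>     // so the transaction stalls until the client times out.
>     std::printf("thread A: gave up, POST body never read\n");
> }
>
> int main()
> {
>     client_read_mutex.lock();        // thread B grabs the client-read mutex
>     std::thread a(thread_a_read_client);
>     a.join();
>     client_read_mutex.unlock();      // released too late: nothing retries anymore
> }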
> This is the first time I have used this bug system.  Please let me know how I can provide the configuration files, trace logs, etc.  Thanks!
