Posted to issues@trafficserver.apache.org by "Uri Shachar (Created) (JIRA)" <ji...@apache.org> on 2011/11/24 10:18:39 UTC

[jira] [Created] (TS-1032) Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)

Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)
-------------------------------------------------------------------------------------------------------------------------------------

                 Key: TS-1032
                 URL: https://issues.apache.org/jira/browse/TS-1032
             Project: Traffic Server
          Issue Type: Bug
          Components: Core, HTTP
    Affects Versions: 3.1.1
         Environment: Linux 32bit CentOS 5.4. Pre-open source version of ATS.
            Reporter: Uri Shachar


This happened twice on a very old version of ATS (pre-open-source code), but it looks like it can happen in current ATS as well. It is a very rare race condition, and I have not been able to reproduce it.

Scenario:
	1)      A client request arrives, is handled by TS thread 1, and is reenabled by a plugin (inside a continuation called by ContSched).
	2)      TS thread 2 starts to connect upstream.
	3)      A client disconnection event is placed in thread 1's queue.
	4)      A successful connection event is placed in thread 2's queue.
	5)      Thread 1 starts to handle pending events (setting its cur_time to X).
	6)      Thread 2 starts to handle pending events (setting its cur_time to Z = X + Y).
	7)      Thread 2 handles the connection-established event (setting server_first_connect to Z).
	8)      Thread 1 handles the client disconnection event, computes a negative wait, and trips the `wait >= 0` assert.

Sample stack trace:

Program received signal SIGABRT, Aborted.
[Switching to Thread 0xe3131b90 (LWP 14584)]
0xffffe410 in __kernel_vsyscall ()
#0  0xffffe410 in __kernel_vsyscall ()
#1  0x007e2df0 in raise () from /lib/libc.so.6
#2  0x007e484e in abort () from /lib/libc.so.6
#3  0x08427612 in ink_die_die_die (retval=1) at /usr/src/debug/wts/proxy/ts/traffic/libwebsense++/ink_error.cc:45
#4  0x08427778 in ink_fatal_va (return_code=1, message_format=0xe312ee1f "/tmp/ushachar-rpmbuild/BUILD/wts/proxy/ts/traffic/proxy/http2/HttpSM.cc:5572: failed assert `wait >= 0`", ap=0xe312ee08 "\002") at /usr/src/debug/wts/proxy/ts/traffic/libwebsense++/ink_error.cc:100
#5  0x084277d3 in ink_fatal (return_code=1, message_format=0xe312ee1f "/tmp/ushachar-rpmbuild/BUILD/wts/proxy/ts/traffic/proxy/http2/HttpSM.cc:5572: failed assert `wait >= 0`") at /usr/src/debug/wts/proxy/ts/traffic/libwebsense++/ink_error.cc:111
#6  0x08424508 in _ink_assert (a=0x853db72 "wait >= 0", f=0x853ab3c "/tmp/ushachar-rpmbuild/BUILD/wts/proxy/ts/traffic/proxy/http2/HttpSM.cc", l=5572) at /usr/src/debug/wts/proxy/ts/traffic/libwebsense++/ink_assert.cc:27
#7  0x082f2505 in HttpSM::mark_server_down_on_client_abort (this=0xb622ece0) at /usr/src/debug/wts/proxy/ts/traffic/proxy/http2/HttpSM.cc:5572
#8  0x082f6080 in HttpSM::state_watch_for_client_abort (this=0xb622ece0, event=3, data=0x7e0e2a88) at /usr/src/debug/wts/proxy/ts/traffic/proxy/http2/HttpSM.cc:1148
#9  0x082fad0f in HttpSM::main_handler (this=0xb622ece0, event=3, data=0x7e0e2a88) at /usr/src/debug/wts/proxy/ts/traffic/proxy/http2/HttpSM.cc:3213
#10 0x0810a07b in Continuation::handleEvent (this=0xb622ece0, event=3, data=0x7e0e2a88) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/Continuation.h:85
#11 0x083ab348 in read_signal_and_update (event=3, vc=0x7e0e2a30) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixNet.cc:262
#12 0x083ab3fe in read_signal_done (event=3, nh=0xa339b28, vc=0x7e0e2a30) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixNet.cc:300
#13 0x083ab44f in read_signal_error (nh=0xa339b28, vc=0x7e0e2a30, lerrno=104) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixNet.cc:324
#14 0x083ae1c5 in read_from_net (nh=0xa339b28, vc=0x7e0e2a30, thread=0xa32e490) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixNet.cc:783
#15 0x083ae5a7 in UnixNetVConnection::net_read_io (this=0x7e0e2a30, nh=0xa339b28, lthread=0xa32e490) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixNet.cc:1059
#16 0x083adced in NetHandler::mainNetEvent (this=0xa339b28, event=5, e=0xa1ab810) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixNet.cc:1272
#17 0x0810a07b in Continuation::handleEvent (this=0xa339b28, event=5, data=0xa1ab810) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/Continuation.h:85
#18 0x083a19ac in EThread::process_event (this=0xa32e490, e=0xa1ab810, calling_code=5) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixEThread.cc:132
#19 0x0839f800 in EThread::execute (this=0xa32e490) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixEThread.cc:315
#20 0x083b4f9a in spawn_thread_internal (a=0xa385360) at /usr/src/debug/wts/proxy/ts/traffic/proxy/iocore/UnixThread.cc:71
#21 0x009065ab in start_thread () from /lib/libpthread.so.0
#22 0x0088bcfe in clone () from /lib/libc.so.6

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Assigned] (TS-1032) Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)

Posted by "Leif Hedstrom (Assigned) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/TS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leif Hedstrom reassigned TS-1032:
---------------------------------

    Assignee: Leif Hedstrom
    
> Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)
> -------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: TS-1032
>                 URL: https://issues.apache.org/jira/browse/TS-1032
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core, HTTP
>    Affects Versions: 3.1.1
>         Environment: Linux 32bit CentOS 5.4. Pre-open source version of ATS.
>            Reporter: Uri Shachar
>            Assignee: Leif Hedstrom
>             Fix For: 3.1.2
>
>         Attachments: wait_patch.diff
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>

[jira] [Updated] (TS-1032) Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)

Posted by "Leif Hedstrom (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/TS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leif Hedstrom updated TS-1032:
------------------------------

    Fix Version/s: 3.1.2
    

[jira] [Commented] (TS-1032) Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)

Posted by "weijin (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/TS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13166876#comment-13166876 ] 

weijin commented on TS-1032:
----------------------------

cool. 
                

[jira] [Updated] (TS-1032) Assertion when upstream connection is established (with event handled by thread A) and immediately disconnected (handled by thread B)

Posted by "Uri Shachar (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/TS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uri Shachar updated TS-1032:
----------------------------

    Attachment: wait_patch.diff
    