Posted to issues@subversion.apache.org by "Jakub Stroleny (JIRA)" <ji...@apache.org> on 2018/06/28 08:50:00 UTC

[jira] [Comment Edited] (SVN-4626) Deadlock-like behaviour of svnserve in multithreaded mode (-T)

    [ https://issues.apache.org/jira/browse/SVN-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526098#comment-16526098 ] 

Jakub Stroleny edited comment on SVN-4626 at 6/28/18 8:49 AM:
--------------------------------------------------------------

Hello,

We have observed the mentioned behavior again. Here are the exact steps to reproduce the issue:

1) Set up svnserve on a Windows machine (use the following command: svnserve -d -r C:\svn\repo -M 0 --listen-host 127.0.0.1)
 - We found that the issue happens only when svnserve runs on Windows; I tried it on Linux and could not reproduce the issue.

2) Use the attached application (source code included) to reproduce the issue.

The application is the simplest scenario we found during our analysis. It uses the SVNKit library to connect to svnserve. It opens 5 connections to svnserve in quick succession, and the client calls the get-latest-rev operation on each. The client sometimes (about 3 out of 5 runs) gets stuck on the connection read. You may need to run the attached application several times to reproduce the issue.
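
For illustration, here is a minimal sketch of what such a client could look like, assuming SVNKit's low-level SVNRepository API, the svn://127.0.0.1/ URL implied by the command in step 1, and anonymous read access; the attached application is authoritative and may differ in detail:

{code:java}
import org.tmatesoft.svn.core.SVNException;
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryFactoryImpl;
import org.tmatesoft.svn.core.io.SVNRepository;
import org.tmatesoft.svn.core.io.SVNRepositoryFactory;

public class GetLatestRevClient {

    public static void main(String[] args) throws Exception {
        // Register svn:// protocol support before opening svn:// sessions.
        SVNRepositoryFactoryImpl.setup();

        // Assumed URL matching the svnserve command above; adjust to your setup.
        final SVNURL url = SVNURL.parseURIEncoded("svn://127.0.0.1/");

        // Open 5 connections in quick succession, each issuing get-latest-rev.
        Thread[] workers = new Thread[5];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                try {
                    SVNRepository repository = SVNRepositoryFactory.create(url);
                    try {
                        // get-latest-rev over the svn:// protocol; in the failing
                        // runs the hung threads block inside this call on the
                        // socket read.
                        long latest = repository.getLatestRevision();
                        System.out.println("connection " + id + ": latest revision = " + latest);
                    } finally {
                        repository.closeSession();
                    }
                } catch (SVNException e) {
                    e.printStackTrace();
                }
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }
    }
}
{code}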

The application gets stuck at random while printing the latest revision to the console; it does not reach the fifth print every time. I debugged it, and the remaining threads are stuck on the connection read. svnserve does not send any response back until we close one of the existing connections to it. We also analyzed the problem with network dumps and Process Monitor, but no traffic is visible - see the discussion about it with my colleague: [users discussion|http://subversion.1072662.n5.nabble.com/Deadlock-like-behaviour-of-svnserve-in-multi-threaded-mode-T-td196421.html]

The client-side threads are stuck on the connection read from svnserve:

SocketInputStream.socketRead0(FileDescriptor, byte[], int, int, int) line: not available [native method]

SocketInputStream.socketRead(FileDescriptor, byte[], int, int, int) line: 116

SocketInputStream.read(byte[], int, int, int) line: 171    

...etc..

 

Here is the client application to reproduce the issue:

[^repo-client-1.0-SNAPSHOT.zip]

[^repo-client-1.0-SNAPSHOT-sources.jar]

Can you give us a hint or some more information about what might be wrong here?

If you need more information, please let me know.


> Deadlock-like behaviour of svnserve in multithreaded mode (-T)
> --------------------------------------------------------------
>
>                 Key: SVN-4626
>                 URL: https://issues.apache.org/jira/browse/SVN-4626
>             Project: Subversion
>          Issue Type: Bug
>          Components: svnserve
>    Affects Versions: 1.9.3
>         Environment: Windows 10, CentOS 6.6
>            Reporter: Roman Kratochvil
>            Priority: Major
>         Attachments: repo-client-1.0-SNAPSHOT-sources.jar, repo-client-1.0-SNAPSHOT.zip
>
>
> Our application generates a lot of concurrent read requests to Subversion using the svn: protocol. When we tested the multithreaded mode of svnserve after upgrading to 1.9.3, we noticed strange 'deadlock-like' behaviour: at some point all requests are blocked in svnserve and wait there for a few minutes (3 to 5 minutes, no CPU activity), after which they continue to work. This makes our application significantly slower. We observed this behaviour on both Windows 10 and CentOS 6.6.
> The workaround is to run svnserve without the -T switch, i.e. not using multithreaded mode.
> Here is a sample thread dump of svnserve.exe during the 'deadlock', obtained on Windows 10 using Process Explorer:
> ntoskrnl.exe!KeSynchronizeExecution+0x3de6
> ntoskrnl.exe!KeWaitForMutexObject+0xc7a
> ntoskrnl.exe!KeWaitForMutexObject+0x709
> ntoskrnl.exe!KeWaitForMutexObject+0x375
> ntoskrnl.exe!IoThreadToProcess+0xff0
> ntoskrnl.exe!KeRemoveQueueEx+0x16ba
> ntoskrnl.exe!KeWaitForMutexObject+0xe8e
> ntoskrnl.exe!KeWaitForMutexObject+0x709
> ntoskrnl.exe!KeWaitForMutexObject+0x375
> ntoskrnl.exe!NtWaitForSingleObject+0xf2
> ntoskrnl.exe!setjmpex+0x3963
> ntdll.dll!NtWaitForSingleObject+0x14
> MSWSOCK.dll!Tcpip6_WSHSetSocketInformation+0x155
> MSWSOCK.dll+0x1bf1
> WS2_32.dll!WSAAccept+0xce
> WS2_32.dll!accept+0x12
> libapr-1.dll!apr_socket_accept+0x46
> svnserve.exe+0xc11c
> svnserve.exe+0xbae5
> svnserve.exe+0xaf6c
> svnserve.exe+0x13ab
> KERNEL32.DLL!BaseThreadInitThunk+0x22
> ntdll.dll!RtlUserThreadStart+0x34
> A similar stack can be seen in other threads too.


