Posted to dev@synapse.apache.org by Hiranya Jayathilaka <hi...@gmail.com> on 2013/11/21 21:08:50 UTC

HTTP Core Performance and Reactor Buffer Size

Hi Devs,

I just found out that the performance of the Synapse Pass Through transport is highly sensitive to the RcvBufferSize of the IO reactors (especially when mediating very large messages). Here are some test results. In this case, I'm simply passing a 1M message through Synapse to a backend server, which echoes it back to the client. Notice how the execution time of the scenario varies with the RcvBufferSize of the IO reactors.

RcvBufferSize (in bytes)          Scenario Execution Time (in seconds)
======================================================================
8192 (Synapse default)            25.9
16384                              0.4
32768                              0.2

Is this behavior normal? If so, does it make sense to change the Synapse default buffer size to something larger (e.g. 16k)?
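
For reference, this is roughly what raising the reactor buffer size looks like with the httpcore-nio 4.3-style API (a minimal sketch; the actual wiring inside the Pass Through transport differs):

import org.apache.http.impl.nio.reactor.DefaultListeningIOReactor;
import org.apache.http.impl.nio.reactor.IOReactorConfig;

IOReactorConfig config = IOReactorConfig.custom()
        .setRcvBufSize(16 * 1024)   // SO_RCVBUF; also caps the advertised TCP window
        .setSndBufSize(16 * 1024)   // SO_SNDBUF
        .build();
DefaultListeningIOReactor ioReactor = new DefaultListeningIOReactor(config);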

Interestingly, I see this difference in behavior on Linux only; I don't see a significant change in behavior on Mac.

Appreciate your thoughts on this.

Thanks,
Hiranya

--
Hiranya Jayathilaka
Mayhem Lab/RACE Lab;
Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
Blog: http://techfeast-hiranya.blogspot.com


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Sun, 2013-11-24 at 11:49 -0800, Hiranya Jayathilaka wrote:
> Hi Andreas, Hi Oleg,
> 
> 
> This is some excellent detective work :) Thanks for looking into this.
> I'm glad that we now have a better understanding of the issue and the
> problems are already being fixed in the httpcore trunk. 
> 
> 
> In the meantime, what if we also fix Synapse to not set these buffer
> size values unless the user has specifically requested them to be
> set? That is, if the following properties are not set, we can just let
> Synapse pick the system defaults instead of initializing them to 8k:
> 
> 
> http.socket.rcv-buffer-size
> http.socket.snd-buffer-size
> 
> 
> WDYT?
> 

Sounds reasonable to me.

Oleg




---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
For additional commands, e-mail: dev-help@synapse.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
On Sunday, November 24, 2013, Hiranya Jayathilaka wrote:

> Hi Andreas, Hi Oleg,
>
> This is some excellent detective work :) Thanks for looking into this. I'm
> glad that we now have a better understanding of the issue and the problems
> are already being fixed in the httpcore trunk.
>
> In the meantime, what if we also fix Synapse to not set these buffer size
> values unless the user has specifically requested them to be set? That is,
> if the following properties are not set, we can just let Synapse pick the
> system defaults instead of initializing them to 8k:
>
> http.socket.rcv-buffer-size
> http.socket.snd-buffer-size
>
> WDYT?
>

+1

Andreas



> Thanks,
> Hiranya
>
> On Nov 24, 2013, at 11:10 AM, Oleg Kalnichevski <ol...@apache.org> wrote:
>
> On Sun, 2013-11-24 at 17:29 +0100, Andreas Veithen wrote:
>
> Oleg,
>
> I had a closer look at the second part of the issue. The problem
> actually occurs between Synapse and the back-end server. The situation
> is quite similar to the first part of the issue: in the SYN packet
> sent by Synapse to the back-end, the TCP window size is set to 43690,
> and once the back-end starts sending the response, the window size
> drops to 8192.
>
> In this case, the problem is that httpcore-nio sets the receive buffer
> size _after_ connecting. With the following patch, the problem
> completely disappears:
>
>
> Andreas,
>
> I reviewed and committed the patch. While this change may theoretically
> break some applications, I would consider such a possibility pretty
> marginal.
>
> Oleg
>
> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java
> ===================================================================
> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (revision 1544958)
> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (working copy)
> @@ -176,21 +176,8 @@
>                  }
>                  key.cancel();
>                  if (channel.isConnected()) {
> -                    try {
> -                        try {
> -                            prepareSocket(channel.socket());
> -                        } catch (final IOException ex) {
> -                            if (this.exceptionHandler == null
> -                                    || !this.exceptionHandler.handle(ex)) {
> -                                throw new IOReactorException(
> -                                        "Failure initalizing socket", ex);
> -                            }
> -                        }
> -                        final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
> -                        addChannel(entry);
> -                    } catch (final IOException ex) {
> -                        sessionRequest.failed(ex);
> -                    }
> +                    final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
> +                    addChannel(entry);
>                  }
>              }
>
> @@ -269,9 +256,9 @@
>                      sock.setReuseAddress(this.config.isSoReuseAddress());
>                      sock.bind(request.getLocalAddress());
>                  }
> +                prepareSocket(socketChannel.socket());
>                  final boolean connected = socketChannel.connect(request.getRemoteAddress());
>                  if (connected) {
> -                    prepareSocket(socketChannel.socket());
>                      final ChannelEntry entry = new ChannelEntry(socketChannel, request);
>                      addChannel(entry);
>                      return;
>
> Note that this change will require some more thorough review because
> the prepareSocket method (in AbstractMultiworkerIOReactor) is
> protected but not final. Therefore there could be code that overrides
> this method and assumes that the socket is connected, which is no
> longer the case with my change.
>
> However, all httpcore unit tests still pass after that change, and
> there are also no failures in the integration tests in Synapse.
>
> Andreas
>
> On Sun, Nov 24, 2013 at 1:32 PM, Oleg Kalnichevski <ol...@apache.org>
> wrote:
>
> On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
>
> All,
>
> While debugging this scenario (on Ubuntu with the default receive
> buffer size of 8192 and a payload of 1M), I noticed something else.
> Very early in the test execution, there are TCP retransmissions from
> the client to Synapse. This is of course weird and should not happen.
> While trying to understand why that occurs, I noticed that the TCP
> window size advertised by Synapse to the client is initially 43690,
> and then drops gradually to 8192. The latter value is expected because
> it corresponds to the receive buffer size. The question is why the TCP
> window is initially 43690.
>
> It turns out that this is because httpcore-nio sets the receive buffer
> size only on the sockets for new incoming connections (in
> AbstractMultiworkerIOReactor#prepareSocket), but not on the server
> socket itself [1]. Since the initial TCP window size is advertised in
> the SYN/ACK packet before the connection is accepted (and httpcore-nio
> gets a chance to set the receive buffer size), it will be the default
> receive buffer size, not 8192.
>
> --
> Hiranya Jayathilaka
> Mayhem Lab/RACE Lab;
> Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
> E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
> Blog: http://techfeast-hiranya.blogspot.com
>
>

Re: HTTP Core Performance and Reactor Buffer Size

Posted by Hiranya Jayathilaka <hi...@gmail.com>.
Hi Andreas, Hi Oleg,

This is some excellent detective work :) Thanks for looking into this. I'm glad that we now have a better understanding of the issue and the problems are already being fixed in the httpcore trunk. 

In the meantime, what if we also fix Synapse to not set these buffer size values unless the user has specifically requested them to be set? That is, if the following properties are not set, we can just let Synapse pick the system defaults instead of initializing them to 8k:

http.socket.rcv-buffer-size
http.socket.snd-buffer-size

WDYT?
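
Concretely, something along these lines (a rough sketch; the Properties lookup and the props variable are illustrative, not the actual Synapse configuration code). httpcore-nio treats a buffer size of 0 as "leave the OS default alone", as the getRcvBufSize() > 0 guard in the patch above shows, so we simply never call the setter unless the property is present:

import org.apache.http.impl.nio.reactor.IOReactorConfig;
import java.util.Properties;

IOReactorConfig.Builder builder = IOReactorConfig.custom();
String rcvBufSize = props.getProperty("http.socket.rcv-buffer-size");
if (rcvBufSize != null) {
    builder.setRcvBufSize(Integer.parseInt(rcvBufSize));  // explicit user override
}
String sndBufSize = props.getProperty("http.socket.snd-buffer-size");
if (sndBufSize != null) {
    builder.setSndBufSize(Integer.parseInt(sndBufSize));  // explicit user override
}
IOReactorConfig config = builder.build();  // otherwise both stay 0 = OS default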

Thanks,
Hiranya

On Nov 24, 2013, at 11:10 AM, Oleg Kalnichevski <ol...@apache.org> wrote:

> On Sun, 2013-11-24 at 17:29 +0100, Andreas Veithen wrote:
>> Oleg,
>> 
>> I had a closer look at the second part of the issue. The problem
>> actually occurs between Synapse and the back-end server. The situation
>> is quite similar to the first part of the issue: in the SYN packet
>> sent by Synapse to the back-end, the TCP window size is set to 43690,
>> and once the back-end starts sending the response, the window size
>> drops to 8192.
>> 
>> In this case, the problem is that httpcore-nio sets the receive buffer
>> size _after_ connecting. With the following patch, the problem
>> completely disappears:
>> 
> 
> Andreas,
> 
> I reviewed and committed the patch. While this change may theoretically
> break some applications, I would consider such a possibility pretty
> marginal.
> 
> Oleg 
> 
>> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java
>> ===================================================================
>> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (revision 1544958)
>> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (working copy)
>> @@ -176,21 +176,8 @@
>>                  }
>>                  key.cancel();
>>                  if (channel.isConnected()) {
>> -                    try {
>> -                        try {
>> -                            prepareSocket(channel.socket());
>> -                        } catch (final IOException ex) {
>> -                            if (this.exceptionHandler == null
>> -                                    || !this.exceptionHandler.handle(ex)) {
>> -                                throw new IOReactorException(
>> -                                        "Failure initalizing socket", ex);
>> -                            }
>> -                        }
>> -                        final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
>> -                        addChannel(entry);
>> -                    } catch (final IOException ex) {
>> -                        sessionRequest.failed(ex);
>> -                    }
>> +                    final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
>> +                    addChannel(entry);
>>                  }
>>              }
>>
>> @@ -269,9 +256,9 @@
>>                      sock.setReuseAddress(this.config.isSoReuseAddress());
>>                      sock.bind(request.getLocalAddress());
>>                  }
>> +                prepareSocket(socketChannel.socket());
>>                  final boolean connected = socketChannel.connect(request.getRemoteAddress());
>>                  if (connected) {
>> -                    prepareSocket(socketChannel.socket());
>>                      final ChannelEntry entry = new ChannelEntry(socketChannel, request);
>>                      addChannel(entry);
>>                      return;
>> 
>> Note that this change will require some more thorough review because
>> the prepareSocket method (in AbstractMultiworkerIOReactor) is
>> protected but not final. Therefore there could be code that overrides
>> this method and assumes that the socket is connected, which is no
>> longer the case with my change.
>> 
>> However, all httpcore unit tests still pass after that change, and
>> there are also no failures in the integration tests in Synapse.
>> 
>> Andreas
>> 
>> On Sun, Nov 24, 2013 at 1:32 PM, Oleg Kalnichevski <ol...@apache.org> wrote:
>>> On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
>>>> All,
>>>> 
>>>> While debugging this scenario (on Ubuntu with the default receive
>>>> buffer size of 8192 and a payload of 1M), I noticed something else.
>>>> Very early in the test execution, there are TCP retransmissions from
>>>> the client to Synapse. This is of course weird and should not happen.
>>>> While trying to understand why that occurs, I noticed that the TCP
>>>> window size advertised by Synapse to the client is initially 43690,
>>>> and then drops gradually to 8192. The latter value is expected because
>>>> it corresponds to the receive buffer size. The question is why the TCP
>>>> window is initially 43690.
>>>> 
>>>> It turns out that this is because httpcore-nio sets the receive buffer
>>>> size only on the sockets for new incoming connections (in
>>>> AbstractMultiworkerIOReactor#prepareSocket), but not on the server
>>>> socket itself [1]. Since the initial TCP window size is advertised in
>>>> the SYN/ACK packet before the connection is accepted (and httpcore-nio
>>>> gets a chance to set the receive buffer size), it will be the default
>>>> receive buffer size, not 8192.
>>>> 
>>>> To fix this, I modified httpcore-nio as follows:
>>>> 
>>>> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
>>>> ===================================================================
>>>> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
>>>> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
>>>> @@ -233,6 +233,9 @@
>>>>             try {
>>>>                 final ServerSocket socket = serverChannel.socket();
>>>>                 socket.setReuseAddress(this.config.isSoReuseAddress());
>>>> +                if (this.config.getRcvBufSize() > 0) {
>>>> +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
>>>> +                }
>>>>                 serverChannel.configureBlocking(false);
>>>>                 socket.bind(address);
>>>>             } catch (final IOException ex) {
>>>> 
>>>> This fixes the TCP window and retransmission problem, and it also
>>>> appears to fix half of the overall issue: now transmitting the 1M
>>>> request payload only takes a few hundred milliseconds instead of 20
>>>> seconds. However, the issue still exists in the return path.
>>>> 
>>>> Andreas
>>>> 
>>> 
>>> Andreas
>>> 
>>> I committed the patch to SVN trunk. Please review.
>>> 
>>> Could you please elaborate on what you mean by 'issue still exists in
>>> the return path'? I am not sure I quite understand.
>>> 
>>> Oleg
>>> 
>>> 
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
>>> For additional commands, e-mail: dev-help@synapse.apache.org
>>> 
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
>> For additional commands, e-mail: dev-help@hc.apache.org
>> 
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
> For additional commands, e-mail: dev-help@hc.apache.org
> 

--
Hiranya Jayathilaka
Mayhem Lab/RACE Lab;
Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
Blog: http://techfeast-hiranya.blogspot.com


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Sun, 2013-11-24 at 17:29 +0100, Andreas Veithen wrote:
> Oleg,
> 
> I had a closer look at the second part of the issue. The problem
> actually occurs between Synapse and the back-end server. The situation
> is quite similar to the first part of the issue: in the SYN packet
> sent by Synapse to the back-end, the TCP window size is set to 43690,
> and once the back-end starts sending the response, the window size
> drops to 8192.
> 
> In this case, the problem is that httpcore-nio sets the receive buffer
> size _after_ connecting. With the following patch, the problem
> completely disappears:
> 

Andreas,

I reviewed and committed the patch. While this change may theoretically
break some applications, I would consider such a possibility pretty
marginal.

Oleg 

> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java
> ===================================================================
> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (revision 1544958)
> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (working copy)
> @@ -176,21 +176,8 @@
>                  }
>                  key.cancel();
>                  if (channel.isConnected()) {
> -                    try {
> -                        try {
> -                            prepareSocket(channel.socket());
> -                        } catch (final IOException ex) {
> -                            if (this.exceptionHandler == null
> -                                    || !this.exceptionHandler.handle(ex)) {
> -                                throw new IOReactorException(
> -                                        "Failure initalizing socket", ex);
> -                            }
> -                        }
> -                        final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
> -                        addChannel(entry);
> -                    } catch (final IOException ex) {
> -                        sessionRequest.failed(ex);
> -                    }
> +                    final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
> +                    addChannel(entry);
>                  }
>              }
>
> @@ -269,9 +256,9 @@
>                      sock.setReuseAddress(this.config.isSoReuseAddress());
>                      sock.bind(request.getLocalAddress());
>                  }
> +                prepareSocket(socketChannel.socket());
>                  final boolean connected = socketChannel.connect(request.getRemoteAddress());
>                  if (connected) {
> -                    prepareSocket(socketChannel.socket());
>                      final ChannelEntry entry = new ChannelEntry(socketChannel, request);
>                      addChannel(entry);
>                      return;
> 
> Note that this change will require some more thorough review because
> the prepareSocket method (in AbstractMultiworkerIOReactor) is
> protected but not final. Therefore there could be code that overrides
> this method and assumes that the socket is connected, which is no
> longer the case with my change.
> 
> However, all httpcore unit tests still pass after that change, and
> there are also no failures in the integration tests in Synapse.
> 
> Andreas
> 
> On Sun, Nov 24, 2013 at 1:32 PM, Oleg Kalnichevski <ol...@apache.org> wrote:
> > On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
> >> All,
> >>
> >> While debugging this scenario (on Ubuntu with the default receive
> >> buffer size of 8192 and a payload of 1M), I noticed something else.
> >> Very early in the test execution, there are TCP retransmissions from
> >> the client to Synapse. This is of course weird and should not happen.
> >> While trying to understand why that occurs, I noticed that the TCP
> >> window size advertised by Synapse to the client is initially 43690,
> >> and then drops gradually to 8192. The latter value is expected because
> >> it corresponds to the receive buffer size. The question is why the TCP
> >> window is initially 43690.
> >>
> >> It turns out that this is because httpcore-nio sets the receive buffer
> >> size only on the sockets for new incoming connections (in
> >> AbstractMultiworkerIOReactor#prepareSocket), but not on the server
> >> socket itself [1]. Since the initial TCP window size is advertised in
> >> the SYN/ACK packet before the connection is accepted (and httpcore-nio
> >> gets a chance to set the receive buffer size), it will be the default
> >> receive buffer size, not 8192.
> >>
> >> To fix this, I modified httpcore-nio as follows:
> >>
> >> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
> >> ===================================================================
>> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
>> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
> >> @@ -233,6 +233,9 @@
> >>              try {
> >>                  final ServerSocket socket = serverChannel.socket();
> >>                  socket.setReuseAddress(this.config.isSoReuseAddress());
> >> +                if (this.config.getRcvBufSize() > 0) {
> >> +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
> >> +                }
> >>                  serverChannel.configureBlocking(false);
> >>                  socket.bind(address);
> >>              } catch (final IOException ex) {
> >>
> >> This fixes the TCP window and retransmission problem, and it also
> >> appears to fix half of the overall issue: now transmitting the 1M
>> request payload only takes a few hundred milliseconds instead of 20
> >> seconds. However, the issue still exists in the return path.
> >>
> >> Andreas
> >>
> >
> > Andreas
> >
> > I committed the patch to SVN trunk. Please review.
> >
> > Could you please elaborate on what you mean by 'issue still exists in
> > the return path'? I am not sure I quite understand.
> >
> > Oleg
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> > For additional commands, e-mail: dev-help@synapse.apache.org
> >
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
> For additional commands, e-mail: dev-help@hc.apache.org
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
Oleg,

I had a closer look at the second part of the issue. The problem
actually occurs between Synapse and the back-end server. The situation
is quite similar to the first part of the issue: in the SYN packet
sent by Synapse to the back-end, the TCP window size is set to 43690,
and once the back-end starts sending the response, the window size
drops to 8192.

In this case, the problem is that httpcore-nio sets the receive buffer
size _after_ connecting. With the following patch, the problem
completely disappears:

Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java
===================================================================
--- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (revision 1544958)
+++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (working copy)
@@ -176,21 +176,8 @@
                 }
                 key.cancel();
                 if (channel.isConnected()) {
-                    try {
-                        try {
-                            prepareSocket(channel.socket());
-                        } catch (final IOException ex) {
-                            if (this.exceptionHandler == null
-                                    || !this.exceptionHandler.handle(ex)) {
-                                throw new IOReactorException(
-                                        "Failure initalizing socket", ex);
-                            }
-                        }
-                        final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
-                        addChannel(entry);
-                    } catch (final IOException ex) {
-                        sessionRequest.failed(ex);
-                    }
+                    final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
+                    addChannel(entry);
                 }
             }

@@ -269,9 +256,9 @@
                     sock.setReuseAddress(this.config.isSoReuseAddress());
                     sock.bind(request.getLocalAddress());
                 }
+                prepareSocket(socketChannel.socket());
                 final boolean connected = socketChannel.connect(request.getRemoteAddress());
                 if (connected) {
-                    prepareSocket(socketChannel.socket());
                     final ChannelEntry entry = new ChannelEntry(socketChannel, request);
                     addChannel(entry);
                     return;
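
The JDK-level rule behind the reordering in the hunk above, as a standalone sketch (host and port are illustrative): SO_RCVBUF must be in place before connect() for the window advertised during the handshake to reflect it; setting it afterwards is too late.

import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

SocketChannel channel = SocketChannel.open();
// Set the buffer before connect(): the receive buffer size determines the
// TCP window (and window scale option) advertised in the outgoing SYN.
channel.socket().setReceiveBufferSize(8 * 1024);
channel.connect(new InetSocketAddress("backend.example.org", 9000));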

Note that this change will require some more thorough review because
the prepareSocket method (in AbstractMultiworkerIOReactor) is
protected but not final. Therefore there could be code that overrides
this method and assumes that the socket is connected, which is no
longer the case with my change.

However, all httpcore unit tests still pass after that change, and
there are also no failures in the integration tests in Synapse.
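
For completeness, the server-side half (the DefaultListeningIOReactor fix already committed) follows the same ordering rule; a standalone sketch with an illustrative port:

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.nio.channels.ServerSocketChannel;

ServerSocketChannel serverChannel = ServerSocketChannel.open();
ServerSocket serverSocket = serverChannel.socket();
// Set the buffer before bind(): the SYN/ACK advertises the receive window
// before the connection is ever accepted, so setting it on the accepted
// socket (as prepareSocket did) is too late for the handshake.
serverSocket.setReceiveBufferSize(8 * 1024);
serverSocket.bind(new InetSocketAddress(9000));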

Andreas

On Sun, Nov 24, 2013 at 1:32 PM, Oleg Kalnichevski <ol...@apache.org> wrote:
> On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
>> All,
>>
>> While debugging this scenario (on Ubuntu with the default receive
>> buffer size of 8192 and a payload of 1M), I noticed something else.
>> Very early in the test execution, there are TCP retransmissions from
>> the client to Synapse. This is of course weird and should not happen.
>> While trying to understand why that occurs, I noticed that the TCP
>> window size advertised by Synapse to the client is initially 43690,
>> and then drops gradually to 8192. The latter value is expected because
>> it corresponds to the receive buffer size. The question is why the TCP
>> window is initially 43690.
>>
>> It turns out that this is because httpcore-nio sets the receive buffer
>> size only on the sockets for new incoming connections (in
>> AbstractMultiworkerIOReactor#prepareSocket), but not on the server
>> socket itself [1]. Since the initial TCP window size is advertised in
>> the SYN/ACK packet before the connection is accepted (and httpcore-nio
>> gets a chance to set the receive buffer size), it will be the default
>> receive buffer size, not 8192.
>>
>> To fix this, I modified httpcore-nio as follows:
>>
>> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
>> ===================================================================
>> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
>> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
>> @@ -233,6 +233,9 @@
>>              try {
>>                  final ServerSocket socket = serverChannel.socket();
>>                  socket.setReuseAddress(this.config.isSoReuseAddress());
>> +                if (this.config.getRcvBufSize() > 0) {
>> +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
>> +                }
>>                  serverChannel.configureBlocking(false);
>>                  socket.bind(address);
>>              } catch (final IOException ex) {
>>
>> This fixes the TCP window and retransmission problem, and it also
>> appears to fix half of the overall issue: now transmitting the 1M
>> request payload only takes a few hundred milliseconds instead of 20
>> seconds. However, the issue still exists in the return path.
>>
>> Andreas
>>
>
> Andreas
>
> I committed the patch to SVN trunk. Please review.
>
> Could you please elaborate on what you mean by 'issue still exists in
> the return path'? I am not sure I quite understand.
>
> Oleg
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> For additional commands, e-mail: dev-help@synapse.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
On Sun, Nov 24, 2013 at 5:40 PM, Andreas Veithen
<an...@gmail.com> wrote:
> What I still don't understand completely is why this causes such a
> slowdown. The effect of the issue in httpcore-nio is that the peer
> sees the TCP window size gradually drop from 43690 to 8192. Would that
> trigger some mechanism in the TCP stack of the peer that delays the
> transmission of TCP segments (even if the window is not 0)?

After reviewing some aspects of TCP, I think that the most likely
candidate to explain this behavior is actually the "silly window
syndrome" avoidance algorithm. Since the problem occurs in an
integration test, all communication goes over the loopback interface
where MSS=65495. This means that the window size gets much smaller
than both the MSS and the initial/maximum window size. Probably Linux
considers this "silly" small and starts delaying transmission in an
attempt to allow the window to grow to a more reasonable size (which
of course never happens).
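
To put rough numbers on that theory (a sketch of the sender-side test from RFC 1122, section 4.2.3.4, under the loopback assumptions above): the sender may transmit immediately only if it can fill a segment, i.e. usable window >= MSS = 65495, or if usable window >= 1/2 * Max(SND.WND) = 43690 / 2 = 21845. With the window capped at 8192, neither condition can ever hold, so transmission keeps falling back to the override timer, which would account for the observed multi-second stalls.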

Andreas

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
For additional commands, e-mail: dev-help@synapse.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
On Sun, Nov 24, 2013 at 2:21 PM, Oleg Kalnichevski <ol...@apache.org> wrote:
> On Sun, 2013-11-24 at 14:15 +0100, Andreas Veithen wrote:
>> Instead of taking 40 seconds, the scenario now takes 20 seconds, with the
>> request processing completing in less than a second. So there is still a
>> problem, but I haven't had time yet to debug further. I'll keep you
>> updated once I have more information.
>>
>> Andreas
>>
>
> I see. As said in my previous message, messing with OS default settings
> for RCV/SND buffer size seems to cause only grief. 8K for RCV buffer
> sounds way too small.

Well, it causes grief because httpcore-nio doesn't set the receive
buffer size at the right moment. That being said, I agree that 8K
sounds too small.
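
For anyone who wants to see what the OS default actually is on a given box, the JDK reports it directly (a trivial sketch; values vary with the OS and kernel settings such as net.core.rmem_default on Linux):

import java.net.ServerSocket;
import java.net.Socket;

public class DefaultBufSizes {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket(); ServerSocket ss = new ServerSocket()) {
            // Unconnected/unbound sockets report the platform default SO_RCVBUF.
            System.out.println("client default SO_RCVBUF: " + s.getReceiveBufferSize());
            System.out.println("server default SO_RCVBUF: " + ss.getReceiveBufferSize());
        }
    }
}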

What I still don't understand completely is why this causes such a
slowdown. The effect of the issue in httpcore-nio is that the peer
sees the TCP window size gradually drop from 43690 to 8192. Would that
trigger some mechanism in the TCP stack of the peer that delays the
transmission of TCP segments (even if the window is not 0)?

Andreas

> Oleg
>
>> On Sunday, November 24, 2013, Oleg Kalnichevski wrote:
>>
>> > On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
>> > > All,
>> > >
>> > > While debugging this scenario (on Ubuntu with the default receive
>> > > buffer size of 8192 and a payload of 1M), I noticed something else.
>> > > Very early in the test execution, there are TCP retransmissions from
>> > > the client to Synapse. This is of course weird and should not happen.
>> > > While trying to understand why that occurs, I noticed that the TCP
>> > > window size advertised by Synapse to the client is initially 43690,
>> > > and then drops gradually to 8192. The latter value is expected because
>> > > it corresponds to the receive buffer size. The question is why the TCP
>> > > window is initially 43690.
>> > >
>> > > It turns out that this is because httpcore-nio sets the receive buffer
>> > > size only on the sockets for new incoming connections (in
>> > > AbstractMultiworkerIOReactor#prepareSocket), but not on the server
>> > > socket itself [1]. Since the initial TCP window size is advertised in
>> > > the SYN/ACK packet before the connection is accepted (and httpcore-nio
>> > > gets a chance to set the receive buffer size), it will be the default
>> > > receive buffer size, not 8192.
>> > >
>> > > To fix this, I modified httpcore-nio as follows:
>> > >
>> > > Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
>> > > ===================================================================
>> > > --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
>> > > +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
>> > > @@ -233,6 +233,9 @@
>> > >              try {
>> > >                  final ServerSocket socket = serverChannel.socket();
>> > >                  socket.setReuseAddress(this.config.isSoReuseAddress());
>> > > +                if (this.config.getRcvBufSize() > 0) {
>> > > +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
>> > > +                }
>> > >                  serverChannel.configureBlocking(false);
>> > >                  socket.bind(address);
>> > >              } catch (final IOException ex) {
>> > >
>> > > This fixes the TCP window and retransmission problem, and it also
>> > > appears to fix half of the overall issue: now transmitting the 1M
>> > > request payload only takes a few hundred milliseconds instead of 20
>> > > seconds. However, the issue still exists in the return path.
>> > >
>> > > Andreas
>> > >
>> >
>> > Andreas
>> >
>> > I committed the patch to SVN trunk. Please review.
>> >
>> > Could you please elaborate on what you mean by 'issue still exists in
>> > the return path'? I am not sure I quite understand.
>> >
>> > Oleg
>> >
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
>> > For additional commands, e-mail: dev-help@synapse.apache.org
>> >
>> >
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> For additional commands, e-mail: dev-help@synapse.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
For additional commands, e-mail: dev-help@synapse.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Sun, 2013-11-24 at 14:15 +0100, Andreas Veithen wrote:
> Instead of taking 40 seconds, the scenario now takes 20 seconds, with the
> request processing completing in less than a second. So there is still a
> problem, but I haven't had time yet to debug further. I'll keep you
> updated once I have more information.
> 
> Andreas
> 

I see. As said in my previous message, messing with OS default settings
for RCV/SND buffer size seems to cause only grief. 8K for RCV buffer
sounds way too small.

Oleg

> On Sunday, November 24, 2013, Oleg Kalnichevski wrote:
> 
> > On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
> > > All,
> > >
> > > While debugging this scenario (on Ubuntu with the default receive
> > > buffer size of 8192 and a payload of 1M), I noticed something else.
> > > Very early in the test execution, there are TCP retransmissions from
> > > the client to Synapse. This is of course weird and should not happen.
> > > While trying to understand why that occurs, I noticed that the TCP
> > > window size advertised by Synapse to the client is initially 43690,
> > > and then drops gradually to 8192. The latter value is expected because
> > > it corresponds to the receive buffer size. The question is why the TCP
> > > window is initially 43690.
> > >
> > > It turns out that this is because httpcore-nio sets the receive buffer
> > > size only on the sockets for new incoming connections (in
> > > AbstractMultiworkerIOReactor#prepareSocket), but not on the server
> > > socket itself [1]. Since the initial TCP window size is advertised in
> > > the SYN/ACK packet before the connection is accepted (and httpcore-nio
> > > gets a chance to set the receive buffer size), it will be the default
> > > receive buffer size, not 8192.
> > >
> > > To fix this, I modified httpcore-nio as follows:
> > >
> > > Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
> > > ===================================================================
> > > --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
> > > +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
> > > @@ -233,6 +233,9 @@
> > >              try {
> > >                  final ServerSocket socket = serverChannel.socket();
> > >                  socket.setReuseAddress(this.config.isSoReuseAddress());
> > > +                if (this.config.getRcvBufSize() > 0) {
> > > +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
> > > +                }
> > >                  serverChannel.configureBlocking(false);
> > >                  socket.bind(address);
> > >              } catch (final IOException ex) {
> > >
> > > This fixes the TCP window and retransmission problem, and it also
> > > appears to fix half of the overall issue: now transmitting the 1M
> > > request payload only takes a few hundred milliseconds instead of 20
> > > seconds. However, the issue still exists in the return path.
> > >
> > > Andreas
> > >
> >
> > Andreas
> >
> > I committed the patch to SVN trunk. Please review.
> >
> > Could you please elaborate on what you mean by 'issue still exists in
> > the return path'? I am not sure I quite understand.
> >
> > Oleg
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> > For additional commands, e-mail: dev-help@synapse.apache.org
> >
> >



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
Instead of taking 40 seconds, the scenario now takes 20 seconds, with the
request processing completing in less than a second. So there is still a
problem, but I haven't had time yet to debug further. I'll keep you
updated once I have more information.

Andreas

On Sunday, November 24, 2013, Oleg Kalnichevski wrote:

> On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
> > All,
> >
> > While debugging this scenario (on Ubuntu with the default receive
> > buffer size of 8192 and a payload of 1M), I noticed something else.
> > Very early in the test execution, there are TCP retransmissions from
> > the client to Synapse. This is of course weird and should not happen.
> > While trying to understand why that occurs, I noticed that the TCP
> > window size advertised by Synapse to the client is initially 43690,
> > and then drops gradually to 8192. The latter value is expected because
> > it corresponds to the receive buffer size. The question is why the TCP
> > window is initially 43690.
> >
> > It turns out that this is because httpcore-nio sets the receive buffer
> > size only on the sockets for new incoming connections (in
> > AbstractMultiworkerIOReactor#prepareSocket), but not on the server
> > socket itself [1]. Since the initial TCP window size is advertised in
> > the SYN/ACK packet before the connection is accepted (and httpcore-nio
> > gets a chance to set the receive buffer size), it will be the default
> > receive buffer size, not 8192.
> >
> > To fix this, I modified httpcore-nio as follows:
> >
> > Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
> > ===================================================================
> > --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
> > +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
> > @@ -233,6 +233,9 @@
> >              try {
> >                  final ServerSocket socket = serverChannel.socket();
> >                  socket.setReuseAddress(this.config.isSoReuseAddress());
> > +                if (this.config.getRcvBufSize() > 0) {
> > +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
> > +                }
> >                  serverChannel.configureBlocking(false);
> >                  socket.bind(address);
> >              } catch (final IOException ex) {
> >
> > This fixes the TCP window and retransmission problem, and it also
> > appears to fix half of the overall issue: now transmitting the 1M
> > request payload only takes a few 100 milliseconds instead of 20
> > seconds. However, the issue still exists in the return path.
> >
> > Andreas
> >
>
> Andreas
>
> I committed the patch to SVN trunk. Please review.
>
> Could you please elaborate what do you mean by 'issue still exists in
> the return path'. I am not sure I quite understand.
>
> Oleg
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> For additional commands, e-mail: dev-help@synapse.apache.org
>
>

Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
Oleg,

I had a closer look at the second part of the issue. The problem
actually occurs between Synapse and the back-end server. The situation
is quite similar to the first part of the issue: in the SYN packet
sent by Synapse to the back-end, the TCP window size is set to 43690,
and once the back-end starts sending the response, the window size
drops to 8192.

In this case, the problem is that httpcore-nio sets the receive buffer
size _after_ connecting. With the following patch, the problem
completely disappears:

Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java
===================================================================
--- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (revision 1544958)
+++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultConnectingIOReactor.java (working copy)
@@ -176,21 +176,8 @@
                 }
                 key.cancel();
                 if (channel.isConnected()) {
-                    try {
-                        try {
-                            prepareSocket(channel.socket());
-                        } catch (final IOException ex) {
-                            if (this.exceptionHandler == null
-                                    || !this.exceptionHandler.handle(ex)) {
-                                throw new IOReactorException(
-                                        "Failure initalizing socket", ex);
-                            }
-                        }
-                        final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
-                        addChannel(entry);
-                    } catch (final IOException ex) {
-                        sessionRequest.failed(ex);
-                    }
+                    final ChannelEntry entry = new ChannelEntry(channel, sessionRequest);
+                    addChannel(entry);
                 }
             }

@@ -269,9 +256,9 @@
                     sock.setReuseAddress(this.config.isSoReuseAddress());
                     sock.bind(request.getLocalAddress());
                 }
+                prepareSocket(socketChannel.socket());
                 final boolean connected = socketChannel.connect(request.getRemoteAddress());
                 if (connected) {
-                    prepareSocket(socketChannel.socket());
                     final ChannelEntry entry = new ChannelEntry(socketChannel, request);
                     addChannel(entry);
                     return;

Note that this change will require some more thorough review, because
the prepareSocket method (in AbstractMultiworkerIOReactor) is
protected but not final. There could therefore be code that overrides
this method and assumes that the socket is already connected, which is
no longer the case with my change.
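
To make that concern concrete, here is a hypothetical override (purely
illustrative; this is not code from httpcore or any real project) that
would be affected by the change:

import java.io.IOException;
import java.net.Socket;

import org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.apache.http.nio.reactor.IOReactorException;

// Hypothetical subclass that assumes prepareSocket() is invoked on an
// already-connected socket.
class RemoteLoggingIOReactor extends DefaultConnectingIOReactor {

    RemoteLoggingIOReactor(final IOReactorConfig config) throws IOReactorException {
        super(config);
    }

    @Override
    protected void prepareSocket(final Socket socket) throws IOException {
        super.prepareSocket(socket);
        // Socket#getInetAddress() returns null for an unconnected socket,
        // so this would start logging "null" once prepareSocket() runs
        // before connect() instead of after it.
        System.out.println("Prepared socket for " + socket.getInetAddress());
    }
}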

However, all httpcore unit tests still pass after that change, and
there are also no failures in the integration tests in Synapse.
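
For completeness, the JDK contract this patch relies on can be
demonstrated without httpcore; here is a minimal, purely illustrative
sketch (host and port are placeholders):

import java.net.InetSocketAddress;
import java.net.Socket;

public class ClientRcvBufDemo {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) { // created unconnected
            // Set SO_RCVBUF first so that the SYN advertises a matching
            // window (and window scaling can be negotiated for buffers
            // larger than 64K).
            socket.setReceiveBufferSize(16 * 1024);
            socket.connect(new InetSocketAddress("backend.example", 9000), 5000);
            System.out.println("SO_RCVBUF: " + socket.getReceiveBufferSize());
        }
    }
}

The Socket javadoc makes the same point: for the option to affect the
window advertised during the handshake, it has to be set before
connect().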

Andreas

On Sun, Nov 24, 2013 at 1:32 PM, Oleg Kalnichevski <ol...@apache.org> wrote:
> On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
>> All,
>>
>> While debugging this scenario (on Ubuntu with the default receive
>> buffer size of 8192 and a payload of 1M), I noticed something else.
>> Very early in the test execution, there are TCP retransmissions from
>> the client to Synapse. This is of course weird and should not happen.
>> While trying to understand why that occurs, I noticed that the TCP
>> window size advertised by Synapse to the client is initially 43690,
>> and then drops gradually to 8192. The latter value is expected because
>> it corresponds to the receive buffer size. The question is why the TCP
>> window is initially 43690.
>>
>> It turns out that this is because httpcore-nio sets the receive buffer
>> size only on the sockets for new incoming connections (in
>> AbstractMultiworkerIOReactor#prepareSocket), but not on the server
>> socket itself [1]. Since the initial TCP window size is advertised in
>> the SYN/ACK packet before the connection is accepted (and httpcore-nio
>> gets a chance to set the receive buffer size), it will be the default
>> receive buffer size, not 8192.
>>
>> To fix this, I modified httpcore-nio as follows:
>>
>> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
>> ===================================================================
>> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
>> (revision 1544958)
>> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
>> (working copy)
>> @@ -233,6 +233,9 @@
>>              try {
>>                  final ServerSocket socket = serverChannel.socket();
>>                  socket.setReuseAddress(this.config.isSoReuseAddress());
>> +                if (this.config.getRcvBufSize() > 0) {
>> +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
>> +                }
>>                  serverChannel.configureBlocking(false);
>>                  socket.bind(address);
>>              } catch (final IOException ex) {
>>
>> This fixes the TCP window and retransmission problem, and it also
>> appears to fix half of the overall issue: now transmitting the 1M
>> request payload only takes a few 100 milliseconds instead of 20
>> seconds. However, the issue still exists in the return path.
>>
>> Andreas
>>
>
> Andreas
>
> I committed the patch to SVN trunk. Please review.
>
> Could you please elaborate what do you mean by 'issue still exists in
> the return path'. I am not sure I quite understand.
>
> Oleg
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
> For additional commands, e-mail: dev-help@synapse.apache.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
For additional commands, e-mail: dev-help@synapse.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Sun, 2013-11-24 at 13:02 +0100, Andreas Veithen wrote:
> All,
> 
> While debugging this scenario (on Ubuntu with the default receive
> buffer size of 8192 and a payload of 1M), I noticed something else.
> Very early in the test execution, there are TCP retransmissions from
> the client to Synapse. This is of course weird and should not happen.
> While trying to understand why that occurs, I noticed that the TCP
> window size advertised by Synapse to the client is initially 43690,
> and then drops gradually to 8192. The latter value is expected because
> it corresponds to the receive buffer size. The question is why the TCP
> window is initially 43690.
> 
> It turns out that this is because httpcore-nio sets the receive buffer
> size only on the sockets for new incoming connections (in
> AbstractMultiworkerIOReactor#prepareSocket), but not on the server
> socket itself [1]. Since the initial TCP window size is advertised in
> the SYN/ACK packet before the connection is accepted (and httpcore-nio
> gets a chance to set the receive buffer size), it will be the default
> receive buffer size, not 8192.
> 
> To fix this, I modified httpcore-nio as follows:
> 
> Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
> ===================================================================
> --- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
> +++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
> @@ -233,6 +233,9 @@
>              try {
>                  final ServerSocket socket = serverChannel.socket();
>                  socket.setReuseAddress(this.config.isSoReuseAddress());
> +                if (this.config.getRcvBufSize() > 0) {
> +                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
> +                }
>                  serverChannel.configureBlocking(false);
>                  socket.bind(address);
>              } catch (final IOException ex) {
> 
> This fixes the TCP window and retransmission problem, and it also
> appears to fix half of the overall issue: now transmitting the 1M
> request payload only takes a few 100 milliseconds instead of 20
> seconds. However, the issue still exists in the return path.
> 
> Andreas
> 

Andreas

I committed the patch to SVN trunk. Please review.

Could you please elaborate on what you mean by 'issue still exists in
the return path'? I am not sure I quite understand.

Oleg


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Andreas Veithen <an...@gmail.com>.
All,

While debugging this scenario (on Ubuntu with the default receive
buffer size of 8192 and a payload of 1M), I noticed something else.
Very early in the test execution, there are TCP retransmissions from
the client to Synapse. This is of course weird and should not happen.
While trying to understand why that occurs, I noticed that the TCP
window size advertised by Synapse to the client is initially 43690,
and then drops gradually to 8192. The latter value is expected because
it corresponds to the receive buffer size. The question is why the TCP
window is initially 43690.

It turns out that this is because httpcore-nio sets the receive buffer
size only on the sockets for new incoming connections (in
AbstractMultiworkerIOReactor#prepareSocket), but not on the server
socket itself [1]. Since the initial TCP window size is advertised in
the SYN/ACK packet before the connection is accepted (and httpcore-nio
gets a chance to set the receive buffer size), it will be the default
receive buffer size, not 8192.
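
The same behavior can be reproduced in isolation with plain java.net
(a minimal sketch; the port is an arbitrary placeholder):

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerRcvBufDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(); // created unbound
        // Must happen before bind(): the TCP window advertised in the
        // SYN/ACK is derived from the listening socket's receive buffer.
        server.setReceiveBufferSize(8192);
        server.bind(new InetSocketAddress(8280));
        try (Socket accepted = server.accept()) {
            // Sockets returned by accept() inherit the listening
            // socket's SO_RCVBUF.
            System.out.println("SO_RCVBUF: " + accepted.getReceiveBufferSize());
        }
    }
}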

To fix this, I modified httpcore-nio as follows:

Index: httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java
===================================================================
--- httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (revision 1544958)
+++ httpcore-nio/src/main/java/org/apache/http/impl/nio/reactor/DefaultListeningIOReactor.java (working copy)
@@ -233,6 +233,9 @@
             try {
                 final ServerSocket socket = serverChannel.socket();
                 socket.setReuseAddress(this.config.isSoReuseAddress());
+                if (this.config.getRcvBufSize() > 0) {
+                    socket.setReceiveBufferSize(this.config.getRcvBufSize());
+                }
                 serverChannel.configureBlocking(false);
                 socket.bind(address);
             } catch (final IOException ex) {

This fixes the TCP window and retransmission problem, and it also
appears to fix half of the overall issue: transmitting the 1M request
payload now takes only a few hundred milliseconds instead of 20
seconds. However, the issue still exists in the return path.

Andreas

[1] http://docs.oracle.com/javase/7/docs/api/java/net/ServerSocket.html#setReceiveBufferSize(int)

On Thu, Nov 21, 2013 at 9:08 PM, Hiranya Jayathilaka
<hi...@gmail.com> wrote:
> Hi Devs,
>
> I just found out that the performance of the Synapse Pass Through transport
> is highly sensitive to the RcvBufferSize of the IO reactors (especially when
> mediating very large messages). Here are some test results. In this case,
> I'm simply passing through a 1M message through Synapse to a backend server,
> which simply echoes it back to the client. Notice how the execution time of
> the scenario varies with the RcvBufferSize of the IO reactors.
>
> RcvBufferSize (in bytes)                  Scenario Execution Time (in seconds)
> ========================================================
> 8192 (Synapse default)                    25.9
> 16384                                                   0.4
> 32768                                                   0.2
>
> Is this behavior normal? If so does it make sense to change the Synapse
> default buffer size to something larger (e.g. 16k)?
>
> Interestingly I see this difference in behavior on Linux only. I cannot see
> a significant change in behavior on Mac.
>
> Appreciate your thoughts on this.
>
> Thanks,
> Hiranya
>
> --
> Hiranya Jayathilaka
> Mayhem Lab/RACE Lab;
> Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
> E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
> Blog: http://techfeast-hiranya.blogspot.com
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@synapse.apache.org
For additional commands, e-mail: dev-help@synapse.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Thu, 2013-11-21 at 12:08 -0800, Hiranya Jayathilaka wrote:
> Hi Devs,
> 
> I just found out that the performance of the Synapse Pass Through transport is highly sensitive to the RcvBufferSize of the IO reactors (especially when mediating very large messages). Here are some test results. In this case, I'm simply passing through a 1M message through Synapse to a backend server, which simply echoes it back to the client. Notice how the execution time of the scenario varies with the RcvBufferSize of the IO reactors.
> 
> RcvBufferSize (in bytes)                  Scenario Execution Time (in seconds)
> ========================================================
> 8192 (Synapse default)                    25.9
> 16384                                                   0.4
> 32768                                                   0.2
> 

Hiranya

After experimenting extensively with RCV/SND buffer settings (with mixed
results), my personal conclusion was that those settings were better left
alone (set to their system default values).

I am however not a TCP/IP specialist by any stretch of imagination, so I
may be missing something important.
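
For what it's worth, those system defaults are easy to inspect from the
JVM; a minimal sketch using plain java.net:

import java.net.ServerSocket;
import java.net.Socket;

public class DefaultBufSizes {
    public static void main(String[] args) throws Exception {
        // Unconnected/unbound sockets report the OS default buffer sizes.
        try (Socket s = new Socket(); ServerSocket ss = new ServerSocket()) {
            System.out.println("client SO_RCVBUF: " + s.getReceiveBufferSize());
            System.out.println("client SO_SNDBUF: " + s.getSendBufferSize());
            System.out.println("server SO_RCVBUF: " + ss.getReceiveBufferSize());
        }
    }
}

On Linux those defaults typically come from the net.ipv4.tcp_rmem and
net.ipv4.tcp_wmem sysctls, which may also be part of why the behavior
differs between Linux and Mac.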

Oleg 

> Is this behavior normal? If so does it make sense to change the Synapse default buffer size to something larger (e.g. 16k)?
> 
> Interestingly I see this difference in behavior on Linux only. I cannot see a significant change in behavior on Mac. 
> 
> Appreciate your thoughts on this.
> 
> Thanks,
> Hiranya
> 
> --
> Hiranya Jayathilaka
> Mayhem Lab/RACE Lab;
> Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
> E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
> Blog: http://techfeast-hiranya.blogspot.com
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Hiranya Jayathilaka <hi...@gmail.com>.
On Nov 21, 2013, at 3:41 PM, Hiranya Jayathilaka <hi...@gmail.com> wrote:

> In the same manner, performance is also sensitive to the buffer size of ConnectionConfig.

Actually I was wrong. It's only sensitive to the RcvBufferSize on ReactorConfig.
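
For reference, here is a minimal sketch of the setting in question,
assuming the HttpCore NIO 4.3 builder API (IOReactorConfig.custom());
as far as I can tell, leaving rcvBufSize at its default of 0 means the
reactor does not touch SO_RCVBUF at all:

import org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.apache.http.nio.reactor.ConnectingIOReactor;
import org.apache.http.nio.reactor.IOReactorException;

public class ReactorConfigDemo {
    public static ConnectingIOReactor create() throws IOReactorException {
        IOReactorConfig config = IOReactorConfig.custom()
                .setRcvBufSize(16 * 1024) // 0 (the default) keeps the system default
                .build();
        return new DefaultConnectingIOReactor(config);
    }
}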

> 
> Thanks,
> Hiranya
> 
> On Nov 21, 2013, at 12:08 PM, Hiranya Jayathilaka <hi...@gmail.com> wrote:
> 
>> Hi Devs,
>> 
>> I just found out that the performance of the Synapse Pass Through transport is highly sensitive to the RcvBufferSize of the IO reactors (especially when mediating very large messages). Here are some test results. In this case, I'm simply passing through a 1M message through Synapse to a backend server, which simply echoes it back to the client. Notice how the execution time of the scenario varies with the RcvBufferSize of the IO reactors.
>> 
>> RcvBufferSize (in bytes)                  Scenario Execution Time (in seconds)
>> ========================================================
>> 8192 (Synapse default)                    25.9
>> 16384                                                   0.4
>> 32768                                                   0.2
>> 
>> Is this behavior normal? If so does it make sense to change the Synapse default buffer size to something larger (e.g. 16k)?
>> 
>> Interestingly I see this difference in behavior on Linux only. I cannot see a significant change in behavior on Mac. 
>> 
>> Appreciate your thoughts on this.
>> 
>> Thanks,
>> Hiranya
>> 
>> --
>> Hiranya Jayathilaka
>> Mayhem Lab/RACE Lab;
>> Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
>> E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
>> Blog: http://techfeast-hiranya.blogspot.com
>> 
> 
> --
> Hiranya Jayathilaka
> Mayhem Lab/RACE Lab;
> Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
> E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
> Blog: http://techfeast-hiranya.blogspot.com
> 

--
Hiranya Jayathilaka
Mayhem Lab/RACE Lab;
Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
Blog: http://techfeast-hiranya.blogspot.com


Re: HTTP Core Performance and Reactor Buffer Size

Posted by Hiranya Jayathilaka <hi...@gmail.com>.
In the same manner, performance is also sensitive to the buffer size of ConnectionConfig.

Thanks,
Hiranya

On Nov 21, 2013, at 12:08 PM, Hiranya Jayathilaka <hi...@gmail.com> wrote:

> Hi Devs,
> 
> I just found out that the performance of the Synapse Pass Through transport is highly sensitive to the RcvBufferSize of the IO reactors (especially when mediating very large messages). Here are some test results. In this case, I'm simply passing through a 1M message through Synapse to a backend server, which simply echoes it back to the client. Notice how the execution time of the scenario varies with the RcvBufferSize of the IO reactors.
> 
> RcvBufferSize (in bytes)                  Scenario Execution Time (in seconds)
> ========================================================
> 8192 (Synapse default)                    25.9
> 16384                                                   0.4
> 32768                                                   0.2
> 
> Is this behavior normal? If so does it make sense to change the Synapse default buffer size to something larger (e.g. 16k)?
> 
> Interestingly I see this difference in behavior on Linux only. I cannot see a significant change in behavior on Mac. 
> 
> Appreciate your thoughts on this.
> 
> Thanks,
> Hiranya
> 
> --
> Hiranya Jayathilaka
> Mayhem Lab/RACE Lab;
> Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
> E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
> Blog: http://techfeast-hiranya.blogspot.com
> 

--
Hiranya Jayathilaka
Mayhem Lab/RACE Lab;
Dept. of Computer Science, UCSB;  http://cs.ucsb.edu
E-mail: hiranya@cs.ucsb.edu;  Mobile: +1 (805) 895-7443
Blog: http://techfeast-hiranya.blogspot.com

