Posted to user@synapse.apache.org by Andreas Veithen <an...@skynet.be> on 2008/01/05 00:14:39 UTC

Using some sort of in-VM transport between Synapse and a web app deployed on the same server?

Hi all!

Consider the following use case for Synapse:

* Synapse is deployed as a Web application.
* The Synapse configuration contains a proxy service that targets a  
service exposed by another Web application deployed on the same server.
* Both the proxy and the target use HTTP as the transport protocol.
* The target Web application does not use Axis, and it is not  
reasonable to change it in any way.

The obvious solution is to use HTTP to communicate with the target  
service. However, using HTTP to communicate between two Web  
applications deployed on the same server, i.e. inside the same VM,  
seems like overkill. My question is therefore whether there is some  
sort of local transport implementation that would allow this to be  
done more efficiently. Obviously the local transport defined in  
Axis2 can't be used here, because it would require changes to the  
target application. Also, the fact that each of the two Web  
applications has its own class loader would make this tricky.

It appears that the servlet specification actually defines a way for  
two Web applications hosted in the same container to communicate with  
each other directly, i.e. in-VM. Indeed, a Web application can obtain  
the ServletContext of another Web application using  
ServletContext#getContext. Usually the server must be configured to  
allow this kind of cross-context access, but this is supported by  
Tomcat (by specifying crossContext="true" in the context file). Using  
this mechanism, the two Web applications can communicate in two  
different ways:

* By sharing state using get/setAttribute.
* By acquiring a RequestDispatcher for a servlet from the foreign  
context and invoking forward or include on this object (a sketch of  
this follows below).
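
To make the second option concrete, here is a minimal sketch (not
taken from Synapse; the context path "/target" and the servlet path
"/services/MyService" are hypothetical placeholders) of how a servlet
in one Web application could dispatch a request to a servlet in
another Web application on the same server:

import java.io.IOException;

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CrossContextForwardServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request,
                          HttpServletResponse response)
            throws ServletException, IOException {
        // The container must allow cross-context access, e.g. by
        // setting crossContext="true" on Tomcat's <Context> element.
        ServletContext targetContext =
                getServletContext().getContext("/target");
        if (targetContext == null) {
            throw new ServletException("Cross-context access disabled");
        }
        // Dispatcher for the servlet that exposes the target service
        RequestDispatcher dispatcher =
                targetContext.getRequestDispatcher("/services/MyService");
        // Hand the original request/response over to the target servlet
        dispatcher.forward(request, response);
    }
}

Of course this only forwards the request unchanged; the restriction
discussed below is what makes the transformed case more delicate.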

The first approach can be discarded, again because of class loader  
issues and because it would require changes to the second  
application. The second approach is more promising because it allows  
us to invoke the servlet that implements the target service. Note  
however that the servlet specification imposes an important  
restriction on the use of RequestDispatcher#include/forward: "The  
request and response parameters must be either the same objects as  
were passed to the calling servlet’s service method or be subclasses  
of the ServletRequestWrapper or ServletResponseWrapper classes that  
wrap them." This means that these methods can't be called with  
arbitrary HttpServletRequest/Response implementations. Obviously this  
requirement exists to guarantee that the server keeps access to the  
original request/response objects. Some servers may not enforce this  
requirement, but for those that do, the practical consequence is that  
the RequestDispatcher approach only works if the original request is  
received by SynapseAxisServlet, which is perfectly compatible with  
the use case described here.
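
To illustrate how this restriction could be satisfied, here is a
hedged sketch (assuming a Servlet 2.x container, as was current at
the time; the class name and the idea of overriding only the message
body are my own) of a wrapper that keeps the container's original
request underneath while presenting a transformed body to the target
servlet:

import java.io.ByteArrayInputStream;
import java.io.IOException;

import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

// Wraps the original request received by SynapseAxisServlet so that
// RequestDispatcher#forward/include still sees a subclass of
// ServletRequestWrapper around the container's own request object.
public class TransformedRequest extends HttpServletRequestWrapper {

    private final byte[] body;

    public TransformedRequest(HttpServletRequest original,
                              byte[] transformedBody) {
        super(original); // the original request stays underneath
        this.body = transformedBody;
    }

    @Override
    public int getContentLength() {
        return body.length;
    }

    @Override
    public ServletInputStream getInputStream() throws IOException {
        final ByteArrayInputStream in = new ByteArrayInputStream(body);
        // In Servlet 2.x only read() needs to be implemented
        return new ServletInputStream() {
            @Override
            public int read() {
                return in.read();
            }
        };
    }
}

Such a wrapper (together with the original or a similarly wrapped
response) could then be passed to RequestDispatcher#forward.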

My question is whether anybody on this list is aware of an existing  
Axis2 transport implementation that uses this approach or if anybody  
here sees any obstacle that would make it impossible to implement this  
kind of transport.

Regards,

Andreas


---------------------------------------------------------------------
To unsubscribe, e-mail: synapse-user-unsubscribe@ws.apache.org
For additional commands, e-mail: synapse-user-help@ws.apache.org


Re: Using some sort of in-VM transport between Synapse and a web app deployed on the same server?

Posted by Andreas Veithen <an...@gmail.com>.
In the end, we had no choice but to use HTTP.

Andreas

On Tue, Mar 24, 2009 at 04:28, jay white
<ag...@spamcorptastic.com> wrote:
>
> I also have a similar requirement. What solution did you use for
> communication between synapse and webservice hosted in the same jvm?
>
> Andreas Veithen wrote:
>>
>>
>> On 05 Jan 2008, at 03:55, Asankha C. Perera wrote:
>>
>>> Andreas
>>>> * By acquiring a RequestDispatcher for a servlet from the foreign
>>>> context and invoking forward or include on this object.
>>>>
>>>> The first approach can be discarded, again because of class loader
>>>> issues and the fact that it would require changes to the second
>>>> application. The second approach is more promising because it allows
>>>> to invoke the servlet that implements the target service. Note
>>>> however
>>>> that there is an important restriction imposed by the servlet
>>>> specification on the use of RequestDispatcher#include/forward: "The
>>>> request and response parameters must be either the same objects as
>>>> were passed to the calling servlet’s service method or be subclasses
>>>> of the ServletRequestWrapper or ServletResponseWrapper classes that
>>>> wrap them." This means that these methods can't be called with
>>>> arbitrary HttpServletRequest/Response implementations. Obviously this
>>>> requirement is there to guarantee that the server gets access to the
>>>> original request/response objects. It could be that some servers do
>>>> not enforce this requirement, but for those that do, the practical
>>>> consequence is that the RequestDispatcher approach would only work if
>>>> the original request is received by SynapseAxisServlet, which is
>>>> perfectly compatible with the use case described here.
>>>>
>>>> My question is whether anybody on this list is aware of an existing
>>>> Axis2 transport implementation that uses this approach or if anybody
>>>> here sees any obstacle that would make it impossible to implement
>>>> this
>>>> kind of transport.
>>> Since Synapse would most frequently modify the original request and
>>> forward a new (e.g. transformed request of the original) to the back
>>> end
>>> service, I doubt if this approach would be feasible.. however,
>>> although
>>> you could theoretically use Synapse with a servlet transport, we built
>>> the non-blocking http/s transports for Synapse, as acting as an ESB it
>>> needs to handle many connections concurrently passing them between the
>>> requester and the actual service. If using a servlet transport, the
>>> thread invoking Synapse would block until the backend service replies,
>>> and this could lead to the thread pool being exhausted very soon. The
>>> objective of the non-blocking http/s transport is to prevent this
>>> and it
>>> works very well when load tested..
>>
>> I understand that in a full-blown ESB use case where you have many
>> clients connecting to Synapse and where Synapse dispatches those
>> requests to many different backend services, the servlet transport is
>> not a good solution because you will see many worker threads sleeping
>> while waiting for responses from remote services, causing the thread
>> pool of the servlet container to be exhausted very soon. However the
>> use case considered here is different: all incoming requests are
>> transformed and dispatched to a target service deployed in the same
>> container. In this scenario thread pool exhaustion is not a problem
>> because each sleeping worker thread used by Synapse will be waiting
>> for one and only one worker thread used by the target service.
>> Therefore there will never be more than 50% of the container's worker
>> threads used by Synapse and waiting. This implies that more worker
>> threads (roughly the double) are needed to get the same level of
>> concurrency, but this is not a problem because the first limit that
>> will be reached is the number of concurrent requests the target
>> service is able to handle.
>>
>> Note that if a RequestDispatcher could be used to forward the
>> transformed request to the target service, thread allocation would be
>> optimal, because the whole end-to-end processing (Synapse -> target
>> service -> Synapse) would be done by a single worker thread.
>>
>>> Anyway when two apps communicate on
>>> the same host, the TCP overheads are reduced AFAIK by the OS's and the
>>> calls passed through locally
>>>
>>
>> I don't know how Windows works, but for sure, on Unix systems local
>> traffic will have to go through the whole TCP/IP stack down to the IP
>> level. The only optimization is that the MTU of the loopback interface
>> is higher than on a normal network interface, meaning that the TCP
>> stream is broken up into larger segments.
>>
>> Regards,
>>
>> Andreas
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: synapse-user-unsubscribe@ws.apache.org
>> For additional commands, e-mail: synapse-user-help@ws.apache.org
>>
>>
>>
>
> --
> View this message in context: http://www.nabble.com/Using-some-sort-of-in-VM-transport-between-Synapse-and-a-web-app-deployed-on-the-same-server--tp14627006p22673714.html
> Sent from the Synapse - User mailing list archive at Nabble.com.
>
>

Re: Using some sort of in-VM transport between Synapse and a web app deployed on the same server?

Posted by jay white <ag...@spamcorptastic.com>.
I also have a similar requirement. What solution did you end up using
for communication between Synapse and a web service hosted in the
same JVM?

Andreas Veithen wrote:
> 
> 
> On 05 Jan 2008, at 03:55, Asankha C. Perera wrote:
> 
>> Andreas
>>> * By acquiring a RequestDispatcher for a servlet from the foreign
>>> context and invoking forward or include on this object.
>>>
>>> The first approach can be discarded, again because of class loader
>>> issues and the fact that it would require changes to the second
>>> application. The second approach is more promising because it allows
>>> to invoke the servlet that implements the target service. Note  
>>> however
>>> that there is an important restriction imposed by the servlet
>>> specification on the use of RequestDispatcher#include/forward: "The
>>> request and response parameters must be either the same objects as
>>> were passed to the calling servlet’s service method or be subclasses
>>> of the ServletRequestWrapper or ServletResponseWrapper classes that
>>> wrap them." This means that these methods can't be called with
>>> arbitrary HttpServletRequest/Response implementations. Obviously this
>>> requirement is there to guarantee that the server gets access to the
>>> original request/response objects. It could be that some servers do
>>> not enforce this requirement, but for those that do, the practical
>>> consequence is that the RequestDispatcher approach would only work if
>>> the original request is received by SynapseAxisServlet, which is
>>> perfectly compatible with the use case described here.
>>>
>>> My question is whether anybody on this list is aware of an existing
>>> Axis2 transport implementation that uses this approach or if anybody
>>> here sees any obstacle that would make it impossible to implement  
>>> this
>>> kind of transport.
>> Since Synapse would most frequently modify the original request and
>> forward a new (e.g. transformed request of the original) to the back  
>> end
>> service, I doubt if this approach would be feasible.. however,  
>> although
>> you could theoretically use Synapse with a servlet transport, we built
>> the non-blocking http/s transports for Synapse, as acting as an ESB it
>> needs to handle many connections concurrently passing them between the
>> requester and the actual service. If using a servlet transport, the
>> thread invoking Synapse would block until the backend service replies,
>> and this could lead to the thread pool being exhausted very soon. The
>> objective of the non-blocking http/s transport is to prevent this  
>> and it
>> works very well when load tested..
> 
> I understand that in a full-blown ESB use case where you have many  
> clients connecting to Synapse and where Synapse dispatches those  
> requests to many different backend services, the servlet transport is  
> not a good solution because you will see many worker threads sleeping  
> while waiting for responses from remote services, causing the thread  
> pool of the servlet container to be exhausted very soon. However the  
> use case considered here is different: all incoming requests are  
> transformed and dispatched to a target service deployed in the same  
> container. In this scenario thread pool exhaustion is not a problem  
> because each sleeping worker thread used by Synapse will be waiting  
> for one and only one worker thread used by the target service.  
> Therefore there will never be more than 50% of the container's worker  
> threads used by Synapse and waiting. This implies that more worker  
> threads (roughly the double) are needed to get the same level of  
> concurrency, but this is not a problem because the first limit that  
> will be reached is the number of concurrent requests the target  
> service is able to handle.
> 
> Note that if a RequestDispatcher could be used to forward the  
> transformed request to the target service, thread allocation would be  
> optimal, because the whole end-to-end processing (Synapse -> target  
> service -> Synapse) would be done by a single worker thread.
> 
>> Anyway when two apps communicate on
>> the same host, the TCP overheads are reduced AFAIK by the OS's and the
>> calls passed through locally
>>
> 
> I don't know how Windows works, but for sure, on Unix systems local  
> traffic will have to go through the whole TCP/IP stack down to the IP  
> level. The only optimization is that the MTU of the loopback interface  
> is higher than on a normal network interface, meaning that the TCP  
> stream is broken up into larger segments.
> 
> Regards,
> 
> Andreas
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: synapse-user-unsubscribe@ws.apache.org
> For additional commands, e-mail: synapse-user-help@ws.apache.org
> 
> 
> 

-- 
View this message in context: http://www.nabble.com/Using-some-sort-of-in-VM-transport-between-Synapse-and-a-web-app-deployed-on-the-same-server--tp14627006p22673714.html
Sent from the Synapse - User mailing list archive at Nabble.com.


Re: Using some sort of in-VM transport between Synapse and a web app deployed on the same server?

Posted by Andreas Veithen <an...@skynet.be>.
On 05 Jan 2008, at 03:55, Asankha C. Perera wrote:

> Andreas
>> * By acquiring a RequestDispatcher for a servlet from the foreign
>> context and invoking forward or include on this object.
>>
>> The first approach can be discarded, again because of class loader
>> issues and the fact that it would require changes to the second
>> application. The second approach is more promising because it allows
>> to invoke the servlet that implements the target service. Note  
>> however
>> that there is an important restriction imposed by the servlet
>> specification on the use of RequestDispatcher#include/forward: "The
>> request and response parameters must be either the same objects as
>> were passed to the calling servlet’s service method or be subclasses
>> of the ServletRequestWrapper or ServletResponseWrapper classes that
>> wrap them." This means that these methods can't be called with
>> arbitrary HttpServletRequest/Response implementations. Obviously this
>> requirement is there to guarantee that the server gets access to the
>> original request/response objects. It could be that some servers do
>> not enforce this requirement, but for those that do, the practical
>> consequence is that the RequestDispatcher approach would only work if
>> the original request is received by SynapseAxisServlet, which is
>> perfectly compatible with the use case described here.
>>
>> My question is whether anybody on this list is aware of an existing
>> Axis2 transport implementation that uses this approach or if anybody
>> here sees any obstacle that would make it impossible to implement  
>> this
>> kind of transport.
> Since Synapse would most frequently modify the original request and
> forward a new (e.g. transformed request of the original) to the back  
> end
> service, I doubt if this approach would be feasible.. however,  
> although
> you could theoretically use Synapse with a servlet transport, we built
> the non-blocking http/s transports for Synapse, as acting as an ESB it
> needs to handle many connections concurrently passing them between the
> requester and the actual service. If using a servlet transport, the
> thread invoking Synapse would block until the backend service replies,
> and this could lead to the thread pool being exhausted very soon. The
> objective of the non-blocking http/s transport is to prevent this  
> and it
> works very well when load tested..

I understand that in a full-blown ESB use case, where many clients  
connect to Synapse and Synapse dispatches those requests to many  
different backend services, the servlet transport is not a good  
solution: many worker threads would sleep while waiting for responses  
from remote services, quickly exhausting the servlet container's  
thread pool. However, the use case considered here is different: all  
incoming requests are transformed and dispatched to a target service  
deployed in the same container. In this scenario thread pool  
exhaustion is not a problem, because each sleeping worker thread used  
by Synapse waits for exactly one worker thread used by the target  
service. Therefore Synapse can never occupy more than 50% of the  
container's worker threads in a waiting state. This implies that more  
worker threads (roughly double) are needed to reach the same level of  
concurrency, but this is not a problem, because the first limit to be  
reached is the number of concurrent requests the target service can  
handle.

Note that if a RequestDispatcher could be used to forward the  
transformed request to the target service, thread allocation would be  
optimal, because the whole end-to-end processing (Synapse -> target  
service -> Synapse) would be done by a single worker thread.

> Anyway when two apps communicate on
> the same host, the TCP overheads are reduced AFAIK by the OS's and the
> calls passed through locally
>

I don't know how Windows handles this, but on Unix systems local  
traffic certainly has to go through the whole TCP/IP stack down to  
the IP level. The only optimization is that the MTU of the loopback  
interface is higher than that of a normal network interface, meaning  
that the TCP stream is broken up into larger segments.

Regards,

Andreas


---------------------------------------------------------------------
To unsubscribe, e-mail: axis-user-unsubscribe@ws.apache.org
For additional commands, e-mail: axis-user-help@ws.apache.org


Re: Using some sort of in-VM transport between Synapse and a web app deployed on the same server?

Posted by "Asankha C. Perera" <as...@wso2.com>.
Andreas
> * By acquiring a RequestDispatcher for a servlet from the foreign
> context and invoking forward or include on this object.
>
> The first approach can be discarded, again because of class loader
> issues and the fact that it would require changes to the second
> application. The second approach is more promising because it allows
> to invoke the servlet that implements the target service. Note however
> that there is an important restriction imposed by the servlet
> specification on the use of RequestDispatcher#include/forward: "The
> request and response parameters must be either the same objects as
> were passed to the calling servlet’s service method or be subclasses
> of the ServletRequestWrapper or ServletResponseWrapper classes that
> wrap them." This means that these methods can't be called with
> arbitrary HttpServletRequest/Response implementations. Obviously this
> requirement is there to guarantee that the server gets access to the
> original request/response objects. It could be that some servers do
> not enforce this requirement, but for those that do, the practical
> consequence is that the RequestDispatcher approach would only work if
> the original request is received by SynapseAxisServlet, which is
> perfectly compatible with the use case described here.
>
> My question is whether anybody on this list is aware of an existing
> Axis2 transport implementation that uses this approach or if anybody
> here sees any obstacle that would make it impossible to implement this
> kind of transport.
Since Synapse would most frequently modify the original request and
forward a new one (e.g. a transformed version of the original) to the
back-end service, I doubt this approach would be feasible. However,
although you could theoretically use Synapse with a servlet
transport, we built the non-blocking HTTP/S transports for Synapse
because, acting as an ESB, it needs to handle many connections
concurrently while passing them between the requester and the actual
service. With a servlet transport, the thread invoking Synapse would
block until the backend service replies, and this could exhaust the
thread pool very quickly. The objective of the non-blocking HTTP/S
transport is to prevent this, and it works very well when load
tested. Anyway, when two apps communicate on the same host, the TCP
overheads are AFAIK reduced by the OS and the calls are passed
through locally.

asankha

---------------------------------------------------------------------
To unsubscribe, e-mail: axis-user-unsubscribe@ws.apache.org
For additional commands, e-mail: axis-user-help@ws.apache.org

