Posted to users@ace.apache.org by Kl...@Gebit.de on 2014/10/07 18:18:13 UTC

Poor performance of the REST Client API with hundreds of targets.

Dear all,

I want to use ACE (version 2.0.1) for deploying POS (point of sale) Java 
clients. In my scenario I have hundreds or thousands of targets 
(POS clients). I developed an API around the REST API for creating 
distributions, each pointing to an associated feature that contains about 
100 bundles. On the other side I have developed an API (based on the 
REST API) which creates targets and connects them to the distributions. I 
found out that when I import about 1000 targets, the all-in-one server 
becomes extremely slow when managing any kind of data (creating and 
deleting distributions and targets). Allocating a working area takes about 
a minute or so. Sometimes I also get exceptions from the Felix Preferences 
Service. Do you already have experience with a lot of targets, and are 
there any known performance issues? Can you give me some ideas about what 
I can do next to solve my problem?

Thanks in advance, and kind regards
Klaus Meyer

Re: Antwort: Re: Poor performance of the REST Client API with hundreds of targets.

Posted by Marcel Offermans <ma...@luminis.eu>.
Hello Klaus,

On 08 Oct 2014, at 12:07 pm, <Kl...@Gebit.de> wrote:

> Thank you for the very fast answer. First I want to say that I'm very 
> impressed by the concepts, design and quality of ACE and I really want 
> to use it in a production context.

That's a great compliment to everybody here, thanks!

> I did some further investigation of my performance issue and found out 
> that the remote access is not the problem. The checkout of the 
> RepositoryAdmin is the method where the time is lost. So the problem 
> seems to be in the core of the ACE persistence layer? This seems 
> reasonable, because the checkout makes a local copy of all data as you 
> said, and when I have 1000 targets and some distributions with 100 
> artifacts, there is some work to do. Do you see a chance to optimize it, 
> or do you think there must be another problem?

The initial checkout is indeed a time-consuming process. The data from the server is translated into an object graph. We already did some optimization in this area before releasing 2.0.1 but as you concluded, it is still a bit of a bottleneck. I mentioned using the client separately because that way you can also off-load all that initialization to a different computer. Another thing to consider is to keep the checked out workspace around as long as possible. Depending on your use case you might be able to checkout once, make changes, commit, make more changes, commit, etc. Not a solution, but maybe a useful workaround.
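
To sketch that workaround concretely: below is a rough, untested example of reusing a single REST workspace for several change/commit cycles instead of allocating a fresh one per change. The endpoints (POST /client/work to create a workspace, POST /client/work/{id} to commit) and the JSON shape follow my reading of the REST API documentation; the host, target attributes and Location handling are illustrative assumptions, so please verify them against your setup.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Untested sketch: reuse ONE REST workspace for several change/commit cycles
// instead of allocating a fresh workspace for every change. Endpoints, JSON
// payload, host and Location handling are assumptions, not verified 2.0.1
// behavior.
public class ReusedWorkspaceSketch {

    static String post(String url, String body) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("POST");
        if (body != null) {
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "application/json");
            try (OutputStream os = con.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
        }
        int status = con.getResponseCode(); // send the request
        System.out.println("POST " + url + " -> " + status);
        String location = con.getHeaderField("Location"); // created resource, if any
        con.disconnect();
        return location;
    }

    public static void main(String[] args) throws Exception {
        String server = "http://localhost:8080";

        // One (expensive) workspace allocation...
        String workspace = post(server + "/client/work", null);

        // ...then several cheap change/commit cycles against the same workspace.
        for (int i = 0; i < 3; i++) {
            post(server + workspace + "/target",
                    "{\"attributes\": {\"id\": \"pos-target-" + i + "\"}, \"tags\": {}}");
            post(server + workspace, null); // commit the pending changes
        }
    }
}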

> Several thousand targets is a very typical requirement in our POS 
> domain. The performance problem at the moment is just in the area of 
> managing the repositories, not in the area of targets downloading 
> artifacts. In the latter case I could optimize it with the concept of 
> relay servers, if I understood right.

That is correct.

There are ways of "partitioning" those targets that might be worth considering; can you somehow divide your targets into groups?

> Another problem is that sometimes exceptions are thrown from the 
> Preferences Service:
> 
> org.osgi.service.prefs.BackingStoreException: Unable to load preferences.
>        at org.apache.felix.prefs.impl.DataFileBackingStoreImpl.load(DataFileBackingStoreImpl.java:164)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.update(StreamBackingStoreImpl.java:102)
>        at org.apache.felix.prefs.PreferencesImpl.sync(PreferencesImpl.java:588)
>        at org.apache.felix.prefs.PreferencesImpl.childrenNames(PreferencesImpl.java:375)
>        at org.apache.ace.client.repository.impl.RepositorySet.loadPreferences(RepositorySet.java:159)
>        at org.apache.ace.client.repository.impl.RepositoryAdminImpl.login(RepositoryAdminImpl.java:404)
>        at org.apache.ace.client.repository.impl.RepositoryAdminImpl.login(RepositoryAdminImpl.java:385)
>        at org.apache.ace.client.workspace.impl.WorkspaceImpl.login(WorkspaceImpl.java:151)
>        at org.apache.ace.client.workspace.impl.WorkspaceManagerImpl.createWorkspace(WorkspaceManagerImpl.java:160)
>        at org.apache.ace.client.rest.RESTClientServlet.doPost(RESTClientServlet.java:264)
>        at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>        at org.apache.felix.http.base.internal.handler.ServletHandler.doHandle(ServletHandler.java:96)
>        at org.apache.felix.http.base.internal.handler.ServletHandler.handle(ServletHandler.java:79)
>        at org.apache.felix.http.base.internal.dispatch.ServletPipeline.handle(ServletPipeline.java:42)
>        at org.apache.felix.http.base.internal.dispatch.InvocationFilterChain.doFilter(InvocationFilterChain.java:49)
>        at org.apache.felix.http.base.internal.dispatch.HttpFilterChain.doFilter(HttpFilterChain.java:33)
>        at org.apache.felix.http.base.internal.dispatch.FilterPipeline.dispatch(FilterPipeline.java:48)
>        at org.apache.felix.http.base.internal.dispatch.Dispatcher.dispatch(Dispatcher.java:39)
>        at org.apache.felix.http.base.internal.DispatcherServlet.service(DispatcherServlet.java:67)
>        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
>        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
>        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
>        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
>        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
>        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
>        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
>        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>        at org.eclipse.jetty.server.Server.handle(Server.java:369)
>        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
>        at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:933)
>        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:995)
>        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
>        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
>        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
>        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
>        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
>        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
>        at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.StreamCorruptedException: unexpected EOF in middle of data block
>        at java.io.ObjectInputStream$BlockDataInputStream.refill(Unknown Source)
>        at java.io.ObjectInputStream$BlockDataInputStream.read(Unknown Source)
>        at java.io.DataInputStream.readInt(Unknown Source)
>        at java.io.ObjectInputStream$BlockDataInputStream.readInt(Unknown Source)
>        at java.io.ObjectInputStream.readInt(Unknown Source)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.readPreferences(StreamBackingStoreImpl.java:165)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:144)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
>        at org.apache.felix.prefs.impl.DataFileBackingStoreImpl.load(DataFileBackingStoreImpl.java:159)
>        ... 42 more
> 
> 
> Do you have any ideas?

I'm not quite sure what causes this. Do you have a reproducible set of steps to get this exception? In any case, could you file a bug in Jira for it so we can investigate it further?

Greetings, Marcel


Antwort: Re: Poor performance of the REST Client API with hundreds of targets.

Posted by Kl...@Gebit.de.
Hello Marcel,

Thank you for the very fast answer. First I want to say that I'm very 
impressed by the concepts, design and quality of ACE and I really want 
to use it in a production context.
I did some further investigation of my performance issue and found out 
that the remote access is not the problem. The checkout of the 
RepositoryAdmin is the method where the time is lost. So the problem 
seems to be in the core of the ACE persistence layer? This seems 
reasonable, because the checkout makes a local copy of all data as you 
said, and when I have 1000 targets and some distributions with 100 
artifacts, there is some work to do. Do you see a chance to optimize it, 
or do you think there must be another problem? Several thousand targets 
is a very typical requirement in our POS domain. The performance problem 
at the moment is just in the area of managing the repositories, not in 
the area of targets downloading artifacts. In the latter case I could 
optimize it with the concept of relay servers, if I understood right. 
Another problem is that sometimes exceptions are thrown from the 
Preferences Service:

org.osgi.service.prefs.BackingStoreException: Unable to load preferences.
        at org.apache.felix.prefs.impl.DataFileBackingStoreImpl.load(DataFileBackingStoreImpl.java:164)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.update(StreamBackingStoreImpl.java:102)
        at org.apache.felix.prefs.PreferencesImpl.sync(PreferencesImpl.java:588)
        at org.apache.felix.prefs.PreferencesImpl.childrenNames(PreferencesImpl.java:375)
        at org.apache.ace.client.repository.impl.RepositorySet.loadPreferences(RepositorySet.java:159)
        at org.apache.ace.client.repository.impl.RepositoryAdminImpl.login(RepositoryAdminImpl.java:404)
        at org.apache.ace.client.repository.impl.RepositoryAdminImpl.login(RepositoryAdminImpl.java:385)
        at org.apache.ace.client.workspace.impl.WorkspaceImpl.login(WorkspaceImpl.java:151)
        at org.apache.ace.client.workspace.impl.WorkspaceManagerImpl.createWorkspace(WorkspaceManagerImpl.java:160)
        at org.apache.ace.client.rest.RESTClientServlet.doPost(RESTClientServlet.java:264)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.apache.felix.http.base.internal.handler.ServletHandler.doHandle(ServletHandler.java:96)
        at org.apache.felix.http.base.internal.handler.ServletHandler.handle(ServletHandler.java:79)
        at org.apache.felix.http.base.internal.dispatch.ServletPipeline.handle(ServletPipeline.java:42)
        at org.apache.felix.http.base.internal.dispatch.InvocationFilterChain.doFilter(InvocationFilterChain.java:49)
        at org.apache.felix.http.base.internal.dispatch.HttpFilterChain.doFilter(HttpFilterChain.java:33)
        at org.apache.felix.http.base.internal.dispatch.FilterPipeline.dispatch(FilterPipeline.java:48)
        at org.apache.felix.http.base.internal.dispatch.Dispatcher.dispatch(Dispatcher.java:39)
        at org.apache.felix.http.base.internal.DispatcherServlet.service(DispatcherServlet.java:67)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.eclipse.jetty.server.Server.handle(Server.java:369)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
        at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:933)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:995)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.io.StreamCorruptedException: unexpected EOF in middle of data block
        at java.io.ObjectInputStream$BlockDataInputStream.refill(Unknown Source)
        at java.io.ObjectInputStream$BlockDataInputStream.read(Unknown Source)
        at java.io.DataInputStream.readInt(Unknown Source)
        at java.io.ObjectInputStream$BlockDataInputStream.readInt(Unknown Source)
        at java.io.ObjectInputStream.readInt(Unknown Source)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.readPreferences(StreamBackingStoreImpl.java:165)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:144)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.StreamBackingStoreImpl.read(StreamBackingStoreImpl.java:152)
        at org.apache.felix.prefs.impl.DataFileBackingStoreImpl.load(DataFileBackingStoreImpl.java:159)
        ... 42 more


Do you have any ideas?

Thank you in advance and kind regards

Klaus



From: Marcel Offermans <ma...@luminis.eu>
To: ACE-users Apache ACE users <us...@ace.apache.org>
Date: 07.10.2014 18:47
Subject: Re: Poor performance of the REST Client API with hundreds of targets.




Hello Klaus,

On 07 Oct 2014, at 18:18 pm, Klaus.Meyer@Gebit.de wrote:

> I want to use ACE (version 2.0.1) for deploying POS (point of sale) Java 
> clients. In my scenario I have hundreds or thousands of targets 
> (POS clients). I developed an API around the REST API for creating 
> distributions, each pointing to an associated feature that contains 
> about 100 bundles. On the other side I have developed an API (based on 
> the REST API) which creates targets and connects them to the 
> distributions. I found out that when I import about 1000 targets, the 
> all-in-one server becomes extremely slow when managing any kind of data 
> (creating and deleting distributions and targets). Allocating a working 
> area takes about a minute or so. Sometimes I also get exceptions from 
> the Felix Preferences Service. Do you already have experience with a lot 
> of targets, and are there any known performance issues? Can you give me 
> some ideas about what I can do next to solve my problem?

Let me start by explaining a bit about how the ACE client code works, as 
that will give you more insight into this problem.

ACE consists of a server, a client and an OBR repository. Each can be 
deployed independently, but for convenience we also have an "all-in-one" 
version. The client is what you need to manipulate the repository, and 
fundamentally the client works by "checking out" a copy of the current 
configuration, manipulating it locally and, when done, "committing" the 
new configuration back to the server.

Now, the client has an OSGi API (services) you can talk to, and on top of 
that we built three different APIs:

1. The web UI based on Vaadin.
2. The REST API.
3. The Gogo Shell API.

The first two can work "over the network" with a browser or REST client 
being the primary users.
The last one works "on the client" and is therefore usually a lot faster. 
We use it for all kinds of scripted access such as in continuous 
integration scenarios.

That being said, I have a few recommendations:

First of all, if you build your own API, I would consider building it 
directly on top of the OSGi service-based API, just like the three 
existing clients do. That will be a lot quicker than interfacing over 
the network with a remote client.

Second of all, you might want to consider the Gogo Shell API to script 
these things. We use this in continuous integration, and if it helps I 
can provide some examples (but I probably need to know a bit more detail 
about the deployment process you are using).

Finally, we have tested ACE with hundreds to thousands of targets, so I 
would say that is possible. I currently would not go to tens of 
thousands of targets; our benchmarks have shown that you run into some 
limits there. However, we have options to partition the targets into 
smaller groups within ACE, so if necessary I can go into that a bit more 
and explore whether that is an option that could help you. Also, one 
thing that slows ACE down is having "auto configuration" XML files, 
which are templates with parameters that need to be substituted for each 
target. We are still discussing ways to optimize that.

I hope this gives you some pointers/suggestions that help. Feel free to 
follow up if you have more questions.

Greetings, Marcel




Re: Poor performance of the REST Client API with hundreds of targets.

Posted by Marcel Offermans <ma...@luminis.eu>.
Hello Klaus,

On 07 Oct 2014, at 18:18 pm, Klaus.Meyer@Gebit.de wrote:

> I want to use ACE (version 2.0.1) for deploying POS (point of sale) Java 
> clients. In my scenario I have hundreds or thousands of targets 
> (POS clients). I developed an API around the REST API for creating 
> distributions, each pointing to an associated feature that contains 
> about 100 bundles. On the other side I have developed an API (based on 
> the REST API) which creates targets and connects them to the 
> distributions. I found out that when I import about 1000 targets, the 
> all-in-one server becomes extremely slow when managing any kind of data 
> (creating and deleting distributions and targets). Allocating a working 
> area takes about a minute or so. Sometimes I also get exceptions from 
> the Felix Preferences Service. Do you already have experience with a lot 
> of targets, and are there any known performance issues? Can you give me 
> some ideas about what I can do next to solve my problem?

Let me start by explaining a bit about how the ACE client code works, as that will give you more insight into this problem.

ACE consists of a server, a client and an OBR repository. Each can be deployed independently, but for convenience we also have an "all-in-one" version. The client is what you need to manipulate the repository, and fundamentally the client works by "checking out" a copy of the current configuration, manipulating it locally and, when done, "committing" the new configuration back to the server.

Now, the client has an OSGi API (services) you can talk to, and on top of that we built three different APIs:

1. The web UI based on Vaadin.
2. The REST API.
3. The Gogo Shell API.

The first two can work "over the network" with a browser or REST client being the primary users.
The last one works "on the client" and is therefore usually a lot faster. We use it for all kinds of scripted access such as in continuous integration scenarios.

That being said, I have a few recommendations:

First of all, if you build your own API, I would consider building it directly on top of the OSGi service-based API, just like the three existing clients do. That will be a lot quicker than interfacing over the network with a remote client.
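
To give you an idea of what that could look like, here is a rough, untested sketch. The service and method names below (RepositoryAdmin.checkout()/commit(), TargetRepository.create(attributes, tags), TargetObject.KEY_ID) reflect how I remember the client repository API, so please check them against the 2.0.1 sources before relying on them.

import java.util.HashMap;
import java.util.Map;

import org.apache.ace.client.repository.RepositoryAdmin;
import org.apache.ace.client.repository.object.TargetObject;
import org.apache.ace.client.repository.repository.TargetRepository;

// Sketch only: batch-create targets through the client's OSGi service API.
// Interface and method names are recollections, not verified 2.0.1 code; in
// a real bundle both services would be injected (for example with the Felix
// Dependency Manager) and a login would have to be performed first.
public class TargetImportSketch {

    private volatile RepositoryAdmin m_repositoryAdmin;   // injected
    private volatile TargetRepository m_targetRepository; // injected

    public void importTargets(int count) throws Exception {
        // One checkout for the whole batch (this is the expensive part)...
        m_repositoryAdmin.checkout();

        // ...then create all targets locally, in memory...
        for (int i = 0; i < count; i++) {
            Map<String, String> attributes = new HashMap<String, String>();
            attributes.put(TargetObject.KEY_ID, "pos-target-" + i);
            m_targetRepository.create(attributes, new HashMap<String, String>());
        }

        // ...and push everything back to the server in a single commit.
        m_repositoryAdmin.commit();
    }
}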

Second of all, you might want to consider the Gogo Shell API to script these things. We use this in continuous integration, and if it helps I can provide some examples (but I probably need to know a bit more detail about the deployment process you are using).

Finally, we have tested ACE with hundreds to thousands of targets, so I would say that is possible. I currently would not go to tens of thousands of targets; our benchmarks have shown that you run into some limits there. However, we have options to partition the targets into smaller groups within ACE, so if necessary I can go into that a bit more and explore whether that is an option that could help you. Also, one thing that slows ACE down is having "auto configuration" XML files, which are templates with parameters that need to be substituted for each target. We are still discussing ways to optimize that.

I hope this gives you some pointers/suggestions that help. Feel free to follow up if you have more questions.

Greetings, Marcel