Posted to user@geode.apache.org by Paul Perez <pa...@pymma.com> on 2017/01/17 13:45:59 UTC

sockets Message fetchHeader time consuming

Hello all, 

 

We are trying to save log events from OpenESB in Geode. The events are simple
Java objects with 20 fields such as int, long, String or Date.

The events implement the Serializable interface.

On a single machine, OpenESB + the BPEL engine + the SOAP connector + the
complete process that generates the events costs around 15% of the CPU. When we
uncomment the send to the cache, the CPU consumption roughly doubles, to around
25-35%.

On our machine we created a locator and two servers with the default
parameters (just start server). Then, with the locator, we created three
regions with the default parameters (just create region).

 

In the OpenESB code, we added a cache client that sends the messages to their
regions; for development purposes, two server instances were started on the
same machine. (No issue with memory at all.)

 

// Create a ClientCache with access to a locator defined by its host and port
clientCache = new ClientCacheFactory()
        .addPoolLocator(host, port)
        .set("cache-xml-file", "config/clientCache.xml")
        .create();

 

Cache XML content 

<?xml version="1.0" encoding="UTF-8"?>
<client-cache
    xmlns="http://geode.apache.org/schema/cache"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://geode.apache.org/schema/cache
                        http://geode.apache.org/schema/cache/cache-1.0.xsd"
    version="1.0">
    <region name="bpelEvent" refid="PROXY"/>
    <region name="activityEvent" refid="PROXY"/>
    <region name="variableEvent" refid="PROXY"/>
</client-cache>


At first we thought that object serialisation could be the issue to work on.
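As a side note, the raw cost of default Java serialization can be estimated in isolation with plain JDK code. The `EventRecord` class below is a hypothetical stand-in for the real OpenESB event objects (the actual field names are not shown in this thread):

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Date;

public class SerializationCost {

    // Hypothetical stand-in for the real OpenESB event object.
    static class EventRecord implements Serializable {
        private static final long serialVersionUID = 1L;
        int id = 42;
        long timestamp = System.currentTimeMillis();
        String name = "bpelEvent";
        Date created = new Date();
    }

    // Serialize an object with default Java serialization and
    // return the resulting byte array.
    static byte[] serialize(Object o) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = serialize(new EventRecord());
        // Default Java serialization writes full class metadata,
        // so even a small object produces a sizeable payload.
        System.out.println("serialized size: " + bytes.length + " bytes");
    }
}
```

At 15,000 messages per second, per-object metadata overhead like this adds up, which is why Geode also offers more compact serialization mechanisms (e.g. PDX or DataSerializable).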

With VisualVM, we noticed that a method named
org.apache.geode.internal.cache.tier.sockets.Message.fetchHeader() consumes
more than 60% of the global process time, which seems strange given the number
of classes involved in the process.

We inject around 15,000 messages per second, spread over three different
regions (all elements on the same machine = no network latency, but with only
one client cache).

Does anyone have an idea why so much time is spent in the socket?


Thank you for your help.

Paul

RE: sockets Message fetchHeader time consuming

Posted by Paul Perez <pa...@pymma.com>.
Hello 

Thank you for your reply. 

Yes, you are right; we misread the data provided by the profiling tool.

At the CPU level, we do not see such usage coming from Message.fetchHeader().

Your reply helped us correct our diagnosis, focus our search on other topics and, of course, save time.

Thank you very much.

 

Best regards

 

Paul Perez Chief Architect

Pymma Consulting

--------------------------

Tel: +44 79 44 36 04 65 

Skype ID : polperez

 

From: Darrel Schneider [mailto:dschneider@pivotal.io] 
Sent: 17 January 2017 17:26
To: user@geode.apache.org; paul.perez@pymma.com
Cc: lklein@pivotal.io; David Brassely <br...@gmail.com>
Subject: Re: sockets Message fetchHeader time consuming

 

> [...]


Re: sockets Message fetchHeader time consuming

Posted by Darrel Schneider <ds...@pivotal.io>.
On a Geode CacheServer (which reads messages from clients), Geode creates a
ServerConnection thread for every incoming client connection. When those
threads are idle, they block waiting for something to read from the client.
The place where they block is Message.fetchHeader.
Here is an example of a stack dump I have of an idle ServerConnection
thread:
  "ServerConnection on port 16501 Thread 255" tid=0x3b83 (in native)
      java.lang.Thread.State: RUNNABLE
          at java.net.SocketInputStream.socketRead0(Native Method)
          at java.net.SocketInputStream.read(SocketInputStream.java:152)
          at java.net.SocketInputStream.read(SocketInputStream.java:122)
          at com.gemstone.gemfire.internal.cache.tier.sockets.Message.fetchHeader(Message.java:637)
          at com.gemstone.gemfire.internal.cache.tier.sockets.Message.readHeaderAndPayload(Message.java:661)
          at com.gemstone.gemfire.internal.cache.tier.sockets.Message.read(Message.java:604)
          at com.gemstone.gemfire.internal.cache.tier.sockets.Message.recv(Message.java:1104)
          -  locked java.nio.HeapByteBuffer@60336b21
          at com.gemstone.gemfire.internal.cache.tier.sockets.Message.recv(Message.java:1118)
          at com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.readRequest(BaseCommand.java:1003)
          at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:760)
          at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:942)
          at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1192)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:548)
          at java.lang.Thread.run(Thread.java:745)

I notice you also have a "Thread CPU Time" tab in your tool. It should confirm
that this method is not actually consuming CPU.
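The pattern described above is not Geode-specific: a thread blocked in a native socket read is reported as RUNNABLE even though it is doing no work, which is exactly what makes sampling profilers misattribute time to it. A standalone sketch with plain JDK sockets (no Geode involved) that starts a reader thread, lets it block in read(), and samples its state:

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockedReadDemo {

    // Returns the state of a thread while it is blocked in a socket read().
    // A blocking native read typically shows up as RUNNABLE, not WAITING,
    // just like the idle ServerConnection threads in the stack dump above.
    public static Thread.State stateWhileBlockedInRead() throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {

            Thread reader = new Thread(() -> {
                try (InputStream in = accepted.getInputStream()) {
                    in.read(); // blocks: the peer has not written anything yet
                } catch (Exception ignored) {
                }
            });
            reader.start();

            Thread.sleep(500); // give the reader time to block inside read()
            Thread.State state = reader.getState();

            client.getOutputStream().write(1); // unblock the reader
            reader.join();
            return state;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("reader state while blocked: " + stateWhileBlockedInRead());
    }
}
```

This is why a sampler that attributes samples to the top frame of RUNNABLE threads will charge the idle read loop for "CPU", while the thread-CPU-time view shows almost nothing.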

On Tue, Jan 17, 2017 at 5:45 AM, Paul Perez <pa...@pymma.com> wrote:

> [...]