Posted to user@river.apache.org by Silas De Munck <si...@ua.ac.be> on 2010/09/17 13:28:14 UTC

Jini RMI vs TCP throughput

Hello,

In the context of a distributed discrete event simulation, I need to be able 
to send objects at a high rate between program instances on different hosts. 
Initially I implemented this using remote calls to an object instance living 
on the other side. This performs fairly well, but because I essentially only 
need a one-way stream connection with asynchronous sending, I thought the 
RMI mechanism was not the best solution for this. Therefore I added a TCP 
implementation using Sockets and ObjectStreams.
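For reference, the setup described above can be sketched as follows. This is an illustrative reconstruction (the class and method names are hypothetical, not the poster's actual code): objects are pushed through an ObjectOutputStream over a socket, and a receiver thread reads them back.

```java
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.concurrent.*;

// Minimal sketch of a one-way object stream over a TCP socket, with the
// receiver dumping deserialized objects into a blocking queue.
public class ObjectStreamLink {
    // Sends `items` over a loopback socket and returns what the receiver read.
    public static List<String> roundTrip(List<String> items) throws Exception {
        BlockingQueue<String> received = new LinkedBlockingQueue<>();
        try (ServerSocket server = new ServerSocket(0)) {
            Thread receiver = new Thread(() -> {
                try (Socket s = server.accept();
                     ObjectInputStream in = new ObjectInputStream(
                             new BufferedInputStream(s.getInputStream()))) {
                    for (int i = 0; i < items.size(); i++) {
                        // blocks until the next serialized object arrives
                        received.add((String) in.readObject());
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            receiver.start();
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                 ObjectOutputStream out = new ObjectOutputStream(
                         new BufferedOutputStream(s.getOutputStream()))) {
                s.setTcpNoDelay(true);   // avoid Nagle delays for small writes
                for (String item : items) {
                    out.writeObject(item);
                }
                out.flush();             // push any buffered bytes onto the wire
            }
            receiver.join();
        }
        return new ArrayList<>(received);
    }
}
```

Note that without the Buffered wrappers and the explicit flush, each writeObject can turn into several small socket writes, which is one of the usual suspects in this kind of comparison.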

Now, comparing the RMI and the socket implementations, the results are somewhat 
strange. I expected the sockets to perform better because of the reduced 
overhead, but this isn't the case. My (naive) socket implementation only 
reaches about 25% of the throughput of the RMI implementation.

Does RMI use a different (faster) serialization implementation than the 
serialization used with ObjectInput/OutputStreams?
Are there any other differences that could explain the performance gap?

Any pointers on where to start investigating this issue would be very much 
appreciated.

Regards,

Silas




-- 

Silas De Munck
PhD Student

Computational Modeling and Programming (COMP)
University of Antwerp
Middelheimlaan 1
2020 Antwerpen, Belgium
G2.07, Department of Computer Science and Mathematics

e-mail: silas.demunck@ua.ac.be

Re: Jini RMI vs TCP throughput

Posted by Tom Hobbs <tv...@googlemail.com>.
If memory serves, it's best to make the buffer sizes as close to the TCP
packet payload size as possible.  Depending on your OS, there are some
settings you can tweak which will make your network subsystem take your
data and try to "fill up" the TCP packets (thus sending fewer, but larger,
packets), or send off a new packet with every 'chunk' of data.

Unless something very odd is going on, I'd expect the socket approach to be
pretty quick (relative to the RMI approach) without having to do any
lower-level tweaking.

I'm a bit hazy on the details, so take the above with a decent pinch of
salt.
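For concreteness, the knobs alluded to above map onto standard java.net.Socket options. A minimal sketch (the sizes are illustrative values, not recommendations from the thread):

```java
import java.net.Socket;

// Socket-level tuning knobs: buffer sizes and Nagle's algorithm.
public class SocketTuning {
    public static Socket tune(Socket s) throws Exception {
        s.setSendBufferSize(64 * 1024);     // hint only; the OS may round or clamp it
        s.setReceiveBufferSize(64 * 1024);  // likewise a hint
        s.setTcpNoDelay(true);              // disable Nagle: send small writes immediately
        return s;
    }
}
```

Setting TCP_NODELAY trades the "fill up the packets" behaviour described above for lower per-write latency; which one wins for throughput depends on the write sizes.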



On Fri, Sep 17, 2010 at 2:56 PM, Silas De Munck <si...@ua.ac.be> wrote:

> On Friday 17 September 2010 14:22:43 Patrone, Dennis S. wrote:
> > Are you buffering your naïve streams? I'm not sure if RMI buffers
> > internally, but I suspect it does.  Depending on the size of the objects
> > you are serializing, buffering could make a big difference.
>
> I tried putting BufferedInput/OutputStreams around the socket streams, but
> this does not seem to make a difference.
>
> Would it be necessary to tweak the socket receive and send buffer sizes?
>
> Thanks,
>
> Silas
>

Re: Jini RMI vs TCP throughput

Posted by Silas De Munck <si...@ua.ac.be>.
On Friday 17 September 2010 14:22:43 Patrone, Dennis S. wrote:
> Are you buffering your naïve streams? I'm not sure if RMI buffers internally,
> but I suspect it does.  Depending on the size of the objects you are
> serializing, buffering could make a big difference.

I tried putting BufferedInput/OutputStreams around the socket streams, but this 
does not seem to make a difference.

Would it be necessary to tweak the socket receive and send buffer sizes?

Thanks,

Silas



Re: Jini RMI vs TCP throughput

Posted by Alfredo Ramos <al...@rayamos.com>.
I have run plenty of performance tests for precisely a case like
yours. And indeed, RMI is normally slower, because it uses standard Java
serialization and because RMI performs a round trip for each object sent.
So it is not ideal for streaming.

However, I noticed that you mentioned using ObjectStreams (ObjectOutputStream
and ObjectInputStream, I assume), which doesn't change much in reality. You
are still using Java serialization, just saving the per-object round trip
done by RMI.

I noticed that RMI can be improved slightly by sending batches of objects
(arrays) rather than one by one, but it is never as fast as sockets with
custom serialization. As replacements for Java serialization and ObjectStreams
I tried Google Protocol Buffers (
http://code.google.com/apis/protocolbuffers/docs/overview.html), Thrift (
http://incubator.apache.org/thrift/), and Avro (http://avro.apache.org/).
They are all much faster; see for example
http://wiki.github.com/eishay/jvm-serializers/ for benchmarks.

I ended up implementing my own serialization format, which is not advisable
for everyone (maintenance, error-proneness, etc.), but I had a 'special'
situation and managed to outperform all of those options (CPU utilization
being the main problem for me). Obviously you need a good socket
implementation as well; play with different buffer sizes.

Good luck,

Alfredo Ramos
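As a rough illustration of why a hand-rolled wire format beats default Java serialization, compare the bytes produced for the same payload. The class below is hypothetical (not Alfredo's code): Java serialization carries a stream header and class metadata, while a raw DataOutputStream writes only the payload bytes.

```java
import java.io.*;

// Default Java serialization vs. a hand-rolled wire format for the same data.
public class WireFormats {
    public static byte[] javaSerialize(double x, double y) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new double[] { x, y });  // stream header + class metadata + data
        }
        return bos.toByteArray();
    }

    public static byte[] rawSerialize(double x, double y) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeDouble(x);  // just the 16 payload bytes, no metadata
            out.writeDouble(y);
        }
        return bos.toByteArray();
    }
}
```

The metadata overhead is paid once per stream, but for short-lived streams or per-message streams it dominates, which is part of why the serializer benchmarks linked above show such large differences.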


RE: Jini RMI vs TCP throughput

Posted by "Patrone, Dennis S." <De...@jhuapl.edu>.
Are you buffering your naïve streams? I'm not sure if RMI buffers internally, but I suspect it does.  Depending on the size of the objects you are serializing, buffering could make a big difference.


Re: Jini RMI vs TCP throughput

Posted by Silas De Munck <si...@ua.ac.be>.
Hi,

> 
> A brief look on the web turns up a couple of links which suggest that RMI
> should be *slower* than a Socket implementation.
> 
> See; http://java.sun.com/developer/technicalArticles/ALT/sockets/
> 
> I would suggest that the problem lies with your "(naive) socket
> implementation".  As to what the problem could be, I'm afraid that I'm not
> an expert in that area - especially without seeing any code.

Well, maybe my implementation isn't really naive. It's very much like the 
example in that article. I just send my objects through an ObjectOutputStream 
over the socket's output stream to the receiver. At the receiver side, there is 
a thread looping over the readObject call of an ObjectInputStream, dumping the 
received objects into a LinkedBlockingQueue.

> 
> My personal approach might include;
> - Decide that my RMI approach was fast enough and leave it alone
Well, the RMI approach is fast, but in theory the TCP approach should be even 
faster.

> - Use something like Wireshark and test both approaches to see if it's the
> transmission or the de/serialisation which is taking the time

The only difference I can see at the moment is that with TCP the data is sent 
in short bursts, whereas the flow is more constant when using RMI.

Would converting the code to use NIO make a big difference?


> 
> Have you considered re-asking this question on something like Stack
> Overflow?

I'll try that.

> 
> Sorry I can't be more help.
> 

Thanks anyway.

Regards,

Silas



Re: Jini RMI vs TCP throughput

Posted by Tom Hobbs <tv...@googlemail.com>.
Hi Silas,

A brief look on the web turns up a couple of links which suggest that RMI
should be *slower* than a Socket implementation.

See: http://java.sun.com/developer/technicalArticles/ALT/sockets/

I would suggest that the problem lies with your "(naive) socket
implementation".  As to what the problem could be, I'm afraid that I'm not
an expert in that area - especially without seeing any code.

My personal approach might include:
- Deciding that my RMI approach is fast enough and leaving it alone
- Using something like Wireshark to test both approaches and see if it's the
transmission or the de/serialisation which is taking the time

Have you considered re-asking this question somewhere like Stack
Overflow?

Sorry I can't be more help.

Tom



Re: Jini RMI vs TCP throughput

Posted by Silas De Munck <si...@ua.ac.be>.
I did some more testing with a small example having the same structure as the 
communication layer of my application. 

The results with this test are: 
about 1 MB/s with Jini+RMI and about 1.3 MB/s with a TCP stream 
connection, which is more like what you would expect...

So I think the connection itself is fine, and I now suspect other components of 
causing the slowdown in combination with a TCP connection.

Regards,

Silas


Re: Jini RMI vs TCP throughput

Posted by Silas De Munck <si...@ua.ac.be>.
Yes I did set the TcpNoDelay to true.

I also played a bit with the buffer sizes and it doesn't seem to make a 
difference.

As this is a significant code base (a distributed discrete event simulator), 
it's not easy to show you the code, but I'll try to create a small example 
showing my problem.

Thanks for your help already!

Silas


On Friday 17 September 2010 20:19:35 Tim Blackman wrote:
> Did you call Socket.setTcpNoDelay with true?  This buffering has a big
> impact if you didn't.

RE: Jini RMI vs TCP throughput

Posted by Christopher Dolan <ch...@avid.com>.
But that would just improve latency, not throughput, right?  I can't see
how it could possibly explain a 75% performance hit.  Silas's problem is
a real mystery to me.  I don't believe we're going to get to the bottom
of this unless we can see some code that reproduces the slowness.

Chris

-----Original Message-----
From: Tim Blackman [mailto:tim.blackman@gmail.com] 
Sent: Friday, September 17, 2010 1:20 PM
To: river-user@incubator.apache.org
Cc: river-user@incubator.apache.org
Subject: Re: Jini RMI vs TCP throughput

Did you call Socket.setTcpNoDelay with true?  This buffering has a big
impact if you didn't.

NIO probably won't help -- the JDK uses it underneath anyway.

Also standard RMI is much faster than JERI in my experience.

- Tim


Re: Jini RMI vs TCP throughput

Posted by Tim Blackman <ti...@gmail.com>.
Did you call Socket.setTcpNoDelay with true?  This buffering has a big impact if you didn't.

NIO probably won't help -- the JDK uses it underneath anyway.

Also standard RMI is much faster than JERI in my experience.

- Tim


Re: Jini RMI vs TCP throughput -> SOLVED

Posted by Silas De Munck <si...@ua.ac.be>.
Hi,

Sorry for wasting your time... 

The problem I had was due to a waiting thread on the receiving side that only 
processed events after a timeout. An incoming object from the stream should 
have woken it up immediately instead.

Thank you all for your help, it helped me eliminate a lot of possible causes.
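The bug pattern described here can be sketched with a BlockingQueue consumer (hypothetical names, not the simulator's actual code): polling with a fixed timeout only notices new work at timeout granularity, while a blocking take() wakes up the moment an object arrives.

```java
import java.util.concurrent.*;

// Two consumer styles for objects arriving from the stream.
public class ConsumerStyles {
    // Laggy style: effectively checks for work every `timeoutMillis`.
    public static String pollWithTimeout(BlockingQueue<String> q, long timeoutMillis)
            throws InterruptedException {
        return q.poll(timeoutMillis, TimeUnit.MILLISECONDS);  // may return null on timeout
    }

    // Responsive style: blocks until an element is available, waking immediately.
    public static String takeImmediately(BlockingQueue<String> q)
            throws InterruptedException {
        return q.take();
    }
}
```

With the polling style, throughput is capped near (objects per wakeup) / timeout regardless of how fast the network delivers, which matches the symptom reported in this thread.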

On Friday 17 September 2010 13:28:14 Silas De Munck wrote:
> Hello,
> 
> In the context of a distributed discrete event simulation, I need to be
> able to send objects at a high rate between program instances on different
> hosts. Initially I implemented this using remote calls to an object
> instance living on the other side. This performs fairly well, but because
> essentially I only need a one-way stream-connection with asynchronous
> sending, I thought using the RMI mechanism for this was not the best
> solution. Therefore I added a TCP implementation using Sockets and
> ObjectStreams.
> 
> Now comparing the RMI and the socket implementation, the results are
> somewhat strange. I expected the sockets to perform better because of the
> reduced overhead, but this isn't the case. My (naive) socket
> implementation only reaches about 25% of the throughput rate of the RMI
> implementation.
> 
> Does RMI use a different (faster) serialization implementation, compared to
> the serialization used with ObjectInput/Output-stream?
> Are there any other differences that could explain the performance
> difference?
> 
> Any pointers on where to start investigating this issue would be very much
> appreciated.
> 
> Regards,
> 
> Silas

-- 

Silas De Munck
PhD Student

Computational Modeling and Programming (COMP)
University of Antwerp
Middelheimlaan 1
2020 Antwerpen, Belgium
G2.07, Department of Computer Science and Mathematics

e-mail: silas.demunck@ua.ac.be

Re: Jini RMI vs TCP throughput

Posted by Silas De Munck <si...@ua.ac.be>.
The objects I send are of course not all primitive types, so I thought this 
writeUnshared issue would make a big difference, but it doesn't. This is 
probably because I only test it with a small and relatively short run 
(± 120 sec).

It seems strange to me that this issue isn't mentioned more visibly in the 
docs, as it could cause a lot of problems...

Regards,
Silas

On Friday 17 September 2010 17:23:18 Patrone, Dennis S. wrote:
> >> The objects I send from one side to the other are always "new" so
> >> there should
> >> be no difference in using writeUnshared and writeObject...
> 
> There probably is some time cost (albeit small) associated with writeObject
> managing the back-references.  More importantly, using writeUnshared will
> save you from a significant memory problem.  If you use writeObject and
> your objects are many and all new, you will need to periodically call
> reset() (or close/re-open the object stream) or you will eventually run
> out of memory.
> 
> > This can only be true if the class consists of primitive types (so no
> > Strings). Is it?
> 
> If I understand your question, then the answer is no.  There still will be
> a difference even if the object is just primitives, because the primitive
> values are not written to the stream using writeObject after the first
> write (where they will be with writeUnshared).  It's the top-level object
> (the one containing all of the primitives) that is cached using
> writeObject, not (only) the internal objects.

-- 

Silas De Munck
PhD Student

Computational Modeling and Programming (COMP)
University of Antwerp
Middelheimlaan 1
2020 Antwerpen, Belgium
G2.07, Department of Computer Science and Mathematics

e-mail: silas.demunck@ua.ac.be

RE: Jini RMI vs TCP throughput

Posted by "Patrone, Dennis S." <De...@jhuapl.edu>.
>> The objects I send from one side to the other are always "new" so
>> there should
>> be no difference in using writeUnshared and writeObject...


There probably is some time cost (albeit small) associated with writeObject managing the back-references.  More importantly, using writeUnshared will save you from a significant memory problem.  If you use writeObject and your objects are many and all new, you will need to periodically call reset() (or close/re-open the object stream) or you will eventually run out of memory.
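A minimal sketch of that periodic reset() (the interval and names are illustrative; a good interval depends on your object sizes and send rate):

```java
import java.io.*;
import java.util.*;

// Sender loop that calls reset() every RESET_INTERVAL writes, so the
// stream's handle table (the back-reference cache) cannot grow without
// bound when every object written is new.
class ResettingSender {
    static final int RESET_INTERVAL = 1000;

    static void sendAll(ObjectOutputStream out, Iterable<?> events)
            throws IOException {
        int written = 0;
        for (Object event : events) {
            out.writeObject(event);
            if (++written % RESET_INTERVAL == 0) {
                out.reset(); // discards cached references on both ends
            }
        }
        out.flush();
    }
}
```

The reset marker is handled transparently by the receiving ObjectInputStream, so the reader side needs no changes.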

>
> This can only be true if the class consists of primitive types (so no
> Strings). Is it?

If I understand your question, then the answer is no.  There still will be a difference even if the object is just primitives, because the primitive values are not written to the stream using writeObject after the first write (where they will be with writeUnshared).  It's the top-level object (the one containing all of the primitives) that is cached using writeObject, not (only) the internal objects.  
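This caching is easy to observe in a small experiment (a self-contained sketch, not from the original code):

```java
import java.io.*;

// Shows the back-reference cache: the second writeObject of the same
// instance only writes a handle, so the reader gets the identical object
// back; writeUnshared bypasses the cache and produces a distinct copy.
class SharedVsUnshared {
    public static void main(String[] args) throws Exception {
        String payload = new String("event");

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bos);
        out.writeObject(payload);   // full serialized form, cached
        out.writeObject(payload);   // only a back-reference (handle)
        out.writeUnshared(payload); // full form again, cache bypassed
        out.close();

        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        Object first = in.readObject();
        Object second = in.readObject();
        Object third = in.readObject();

        System.out.println(first == second); // true: same cached instance
        System.out.println(first == third);  // false: unshared copy
    }
}
```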





Re: Jini RMI vs TCP throughput

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 09/17/2010 05:06 PM, Sim IJskes - QCG wrote:
> On 09/17/2010 04:41 PM, Silas De Munck wrote:
>> On Friday 17 September 2010 16:32:26 Sim IJskes - QCG wrote:
>>> On 09/17/2010 04:28 PM, Sim IJskes - QCG wrote:
>>>> On 09/17/2010 01:28 PM, Silas De Munck wrote:
>>>>> the RMI mechanism for this was not the best solution. Therefore I
>>>>> added a TCP
>>>>> implementation using Sockets and ObjectStreams.
>>>>
>>>> How often do you (re-)wrap the socket streams in ObjectStreams?
>>>>
>>>> The (ReplaceTable) cache in ObjectStream may optimize your marshalling.
>>>> And if you recreate the ObjectStream the ReplaceTable will be
>>>> recreated.
>>>
>>> And the HandleTable.
>>>
>>> i.e. the difference in performance between writeUnshared and
>>> writeObject.
>>>
>>> Gr. Sim
>>
>> I only wrap the stream once at connection set-up...
>
> Ok, and only 1 connection per run?
>
>> The objects I send from one side to the other are always "new" so
>> there should
>> be no difference in using writeUnshared and writeObject...
>
> This can only be true if the class consists of primitive types (so no
> Strings). Is it?

of _only_ primitive types.


Re: Jini RMI vs TCP throughput

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 09/17/2010 04:41 PM, Silas De Munck wrote:
> On Friday 17 September 2010 16:32:26 Sim IJskes - QCG wrote:
>> On 09/17/2010 04:28 PM, Sim IJskes - QCG wrote:
>>> On 09/17/2010 01:28 PM, Silas De Munck wrote:
>>>> the RMI mechanism for this was not the best solution. Therefore I
>>>> added a TCP
>>>> implementation using Sockets and ObjectStreams.
>>>
>>> How often do you (re-)wrap the socket streams in ObjectStreams?
>>>
>>> The (ReplaceTable) cache in ObjectStream may optimize your marshalling.
>>> And if you recreate the ObjectStream the ReplaceTable will be recreated.
>>
>> And the HandleTable.
>>
>> i.e. the difference in performance between writeUnshared and writeObject.
>>
>> Gr. Sim
>
> I only wrap the stream once at connection set-up...

Ok, and only 1 connection per run?

> The objects I send from one side to the other are always "new" so there should
> be no difference in using writeUnshared and writeObject...

This can only be true if the class consists of primitive types (so no 
Strings). Is it?

Gr. Sim

Re: Jini RMI vs TCP throughput

Posted by Silas De Munck <si...@ua.ac.be>.
On Friday 17 September 2010 16:32:26 Sim IJskes - QCG wrote:
> On 09/17/2010 04:28 PM, Sim IJskes - QCG wrote:
> > On 09/17/2010 01:28 PM, Silas De Munck wrote:
> >> the RMI mechanism for this was not the best solution. Therefore I
> >> added a TCP
> >> implementation using Sockets and ObjectStreams.
> > 
> > How often do you (re-)wrap the socket streams in ObjectStreams?
> > 
> > The (ReplaceTable) cache in ObjectStream may optimize your marshalling.
> > And if you recreate the ObjectStream the ReplaceTable will be recreated.
> 
> And the HandleTable.
> 
> i.e. the difference in performance between writeUnshared and writeObject.
> 
> Gr. Sim

I only wrap the stream once at connection set-up...

The objects I send from one side to the other are always "new" so there should 
be no difference in using writeUnshared and writeObject...


Gr.
Silas

-- 

Silas De Munck
PhD Student

Computational Modeling and Programming (COMP)
University of Antwerp
Middelheimlaan 1
2020 Antwerpen, Belgium
G2.07, Department of Computer Science and Mathematics

e-mail: silas.demunck@ua.ac.be

Re: Jini RMI vs TCP throughput

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 09/17/2010 04:28 PM, Sim IJskes - QCG wrote:
> On 09/17/2010 01:28 PM, Silas De Munck wrote:
>> the RMI mechanism for this was not the best solution. Therefore I
>> added a TCP
>> implementation using Sockets and ObjectStreams.
>
> How often do you (re-)wrap the socket streams in ObjectStreams?
>
> The (ReplaceTable) cache in ObjectStream may optimize your marshalling.
> And if you recreate the ObjectStream the ReplaceTable will be recreated.

And the HandleTable.

i.e. the difference in performance between writeUnshared and writeObject.

Gr. Sim


Re: Jini RMI vs TCP throughput

Posted by Sim IJskes - QCG <si...@qcg.nl>.
On 09/17/2010 01:28 PM, Silas De Munck wrote:
> the RMI mechanism for this was not the best solution. Therefore I added a TCP
> implementation using Sockets and ObjectStreams.

How often do you (re-)wrap the socket streams in ObjectStreams?

The (ReplaceTable) cache in ObjectStream may optimize your marshalling. 
And if you recreate the ObjectStream the ReplaceTable will be recreated.

Gr. Sim
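The cost of re-wrapping can be seen by counting serialized bytes; a rough sketch (Event is a made-up payload class for illustration):

```java
import java.io.*;

// With a single ObjectOutputStream the class descriptor for Event is
// serialized once and referenced afterwards; a fresh stream per message
// repeats the stream header and the descriptor every time.
class Event implements Serializable {
    final long timestamp;
    Event(long t) { timestamp = t; }
}

class WrapOnceDemo {
    public static void main(String[] args) throws IOException {
        // One stream for the whole run: descriptor written once.
        ByteArrayOutputStream once = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(once);
        for (int i = 0; i < 100; i++) out.writeUnshared(new Event(i));
        out.close();

        // A new ObjectOutputStream per message: header + descriptor each time.
        ByteArrayOutputStream perMessage = new ByteArrayOutputStream();
        for (int i = 0; i < 100; i++) {
            ObjectOutputStream o = new ObjectOutputStream(perMessage);
            o.writeUnshared(new Event(i));
            o.flush();
        }

        System.out.println(once.size() < perMessage.size()); // true
    }
}
```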