Posted to dev@mina.apache.org by Mark Webb <el...@gmail.com> on 2008/03/11 18:28:33 UTC
Re: Session creation overhead (Was: Re: performance of NioDatagramConnector.connect())
Trustin,
Could you provide more information on how you test and profile MINA?
I would like to set up a test environment, but am curious how others
set theirs up. As for the profiler, I use YourKit and have been really
happy with it.
The only comment I have is for #2. Could we let the user pass in
their own Queue implementation? This is done in various programs and
APIs I have seen. Case in point: ThreadPoolExecutor allows the user
to pass in an instance of a BlockingQueue.
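The ThreadPoolExecutor precedent could look roughly like this (a minimal sketch, not MINA code; the pool sizes and queue capacity here are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // The caller picks the queue implementation; the executor just uses it.
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(64);
        ThreadPoolExecutor pool =
                new ThreadPoolExecutor(2, 4, 60L, TimeUnit.SECONDS, queue);

        pool.execute(() -> System.out.println("task executed"));

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

MINA could accept a user-supplied queue the same way, via a constructor or setter.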
Just my $.02
Mark
On Tue, Mar 11, 2008 at 10:11 AM, 이희승 (Trustin Lee) <tr...@gmail.com> wrote:
> 1a) has been resolved:
>
> https://issues.apache.org/jira/browse/DIRMINA-546
>
> Performance has been significantly improved for short-lived
> connections.
>
> I didn't resolve 1b) because these getter calls are essential in most
> cases. For example, they are usually used for logging.
>
> 3) has also been resolved:
>
> https://issues.apache.org/jira/browse/DIRMINA-547
>
> Now frequent connection attempts shouldn't cause any overhead compared
> to blocking I/O.
>
> I'm still not sure how much we would gain by fixing 2) and 4), so I
> haven't fixed them yet.
>
> On 2008-01-17 (Thu), 12:26 +0900, Trustin Lee wrote:
>
>
> > I did some profiling and found some bottlenecks (in the order of
> > importance both in terms of API stability and performance):
> >
> > 1) Too many system calls on session creation.
> > 1a) MINA calls all the Socket.setProperty() methods even when the
> > values are the same as the defaults.
> > - We need to change how the configuration works. For example, we
> > could use non-primitive types such as Integer to allow null, which
> > would mean "use the default".
> > 1b) MINA calls Socket.getProperty() methods immediately on session
> > creation (e.g. Socket.getLocalAddress())
> > - Lazy initialization?
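The nullable-configuration idea from 1a) might be sketched like this (the class and fields below are hypothetical illustrations, not MINA's actual API):

```java
import java.net.Socket;
import java.net.SocketException;

// Hypothetical config class illustrating "null means default" -- not MINA's API.
public class NullableSocketConfig {
    private Integer sendBufferSize;   // null => the user didn't set it
    private Boolean tcpNoDelay;       // null => the user didn't set it

    public void setSendBufferSize(Integer size) { this.sendBufferSize = size; }
    public void setTcpNoDelay(Boolean noDelay)  { this.tcpNoDelay = noDelay; }

    // Only values the user explicitly set reach the socket, so an untouched
    // config issues no setsockopt() system calls at all.
    public void apply(Socket socket) throws SocketException {
        if (sendBufferSize != null) {
            socket.setSendBufferSize(sendBufferSize);
        }
        if (tcpNoDelay != null) {
            socket.setTcpNoDelay(tcpNoDelay);
        }
    }
}
```

An untouched config then costs nothing on session creation, which is the point of 1a).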
> >
> > 2) ConcurrentLinkedQueue
> > - It performs badly compared to a synchronized CircularQueue when
> > the number of accessing threads is very small. We could allow a
> > user to change the queue implementation for each operation (e.g.
> > accepting a new session and write requests).
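The contrast in 2) can be illustrated with a crude single-threaded microbenchmark (no warm-up or JMH, so treat the numbers as a sketch only; ArrayDeque stands in for CircularQueue):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueMicrobench {
    // Push and pop N elements through the queue, timing the whole loop.
    static long timeOfferPoll(Queue<Integer> q, Object lock) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++) {
            if (lock != null) {
                synchronized (lock) { q.offer(i); q.poll(); }
            } else {
                q.offer(i);
                q.poll();
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        long lockedNs = timeOfferPoll(deque, deque);     // coarse-grained lock
        // Lock-free, but allocates a linked node per offer().
        long casNs = timeOfferPoll(new ConcurrentLinkedQueue<>(), null);
        System.out.printf("synchronized ArrayDeque: %d ms, ConcurrentLinkedQueue: %d ms%n",
                lockedNs / 1_000_000, casNs / 1_000_000);
    }
}
```

With a single uncontended thread, the biased/uncontended lock path plus array storage often beats the per-node allocation of ConcurrentLinkedQueue, which is the effect described above.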
> >
> > 3) IdleSessionChecker.addService() and removeService()
> > - It creates and destroys a thread too often when there's only one
> > connection. We could refactor it so that IdleSessionChecker is not
> > a singleton and the service (e.g. DatagramConnector) can control
> > its life cycle. It will be a daemon thread anyway, just in case a
> > user forgets to call dispose().
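A per-service checker as described in 3) might look like this sketch (class and method names are illustrative, not the actual MINA refactoring):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: each service owns its own checker instead of a singleton.
public class PerServiceIdleChecker {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "idle-checker");
                t.setDaemon(true); // won't keep the JVM alive if dispose() is never called
                return t;
            });

    public void start(Runnable checkIdleSessions, long periodMillis) {
        timer.scheduleAtFixedRate(checkIdleSessions, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    // The owning service (e.g. a connector) controls the life cycle.
    public void dispose() {
        timer.shutdownNow();
    }
}
```

Because the thread lives as long as its owning service, a single short-lived connection no longer causes a create/destroy cycle.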
> >
> > 4) ThreadPoolExecutor.execute()
> > - Even though ThreadPoolExecutor is used to minimize the overhead
> > of thread creation, ThreadPoolExecutor.execute() has some
> > inevitable overhead compared to a direct system call.
> >
> > 5) Readiness selection model (NIO / epoll) itself
> > - It's non-blocking and requires additional signaling and
> > notification between two threads. This causes inevitable latency
> > which looks relatively large when the number of managed connections
> > is small. I don't think we need to fix this anyway.
> >
> > Fixing #1-3 is not that difficult, but it needs some changes in the
> > API. I'd like to get some feedback before I proceed.
> >
> > I don't have a specific solution for #4; we need more investigation
> > into whether it's really a big overhead. We could probably measure
> > again after fixing #1-3.
> >
> > Thanks in advance for the feedback,
> > Trustin
> >
> > On Jan 9, 2008 3:19 AM, Mike Heath <mh...@apache.org> wrote:
> > > Thank you for the benchmarks. This is very valuable information.
> > > Unfortunately, we haven't done a lot of performance tuning on MINA UDP.
> > > This is something that we should address in MINA 2.0. Wilson, would
> > > you please log a JIRA issue with your benchmarks so that we can schedule
> > > time to work on this?
> > >
> > > -Mike
> > >
> > >
> > > Wilson Yeung wrote:
> > > > I benchmarked Mina 2.0's NioDatagramConnector vs java.net.DatagramSocket on a
> > > > Linux 2.6 kernel.
> > > >
> > > > Mina 2.0 NioDatagramConnector, connect(), future.addListener(),
> > > > session.close()
> > > > 100,000 iterations
> > > > ~20 seconds
> > > > ~5,000 per second
> > > >
> > > > java.net.DatagramSocket, connect(), disconnect(), close()
> > > > 100,000 iterations
> > > > ~2-3 seconds
> > > > ~30,000 to 50,000 per second
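The java.net.DatagramSocket side of the benchmark above can be reproduced with a loop along these lines (a sketch; the target port is arbitrary, and no packets are actually sent, since connect() on a UDP socket only records the peer address locally):

```java
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.SocketException;

public class DatagramConnectBench {
    public static long run(int iterations) throws SocketException {
        InetSocketAddress target =
                new InetSocketAddress(InetAddress.getLoopbackAddress(), 9999); // arbitrary port
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            DatagramSocket socket = new DatagramSocket();
            socket.connect(target);   // purely local for UDP: just sets the peer
            socket.disconnect();
            socket.close();
        }
        return (System.nanoTime() - start) / 1_000_000; // elapsed ms
    }

    public static void main(String[] args) throws SocketException {
        int iterations = 10_000; // the original test used 100,000
        long ms = run(iterations);
        System.out.printf("%d connect/close cycles in %d ms%n", iterations, ms);
    }
}
```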
> > > >
> > > >
> > >
> > >
> >
> >
> >
> --
> Trustin Lee - Principal Software Engineer, JBoss, Red Hat
> --
>
>
> what we call human nature is actually human habit
> --
> http://gleamynode.net/
>
--
--------------------------------
Talent hits a target no one else can hit; Genius hits a target no one
else can see.
Re: Session creation overhead (Was: Re: performance of NioDatagramConnector.connect())
Posted by Mark Webb <el...@gmail.com>.
Sounds simple enough. Thanks for the info.
On Tue, Mar 11, 2008 at 11:25 PM, 이희승 (Trustin Lee) <tr...@gmail.com> wrote:
> I just wrote a very simple JUnit test case that measures the amount of
> time taken to connect to and disconnect from a MINA server. It took
> only a few minutes to write, and the improvement was obvious (about
> 2-3 times faster).
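Such a timing test can be sketched in plain Java against a throwaway local server (the MINA-specific setup is omitted here; this only shows the shape of the measurement):

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ConnectTimingTest {
    public static long measure(int iterations) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // throwaway local server
            InetSocketAddress addr =
                    new InetSocketAddress("127.0.0.1", server.getLocalPort());

            // Accept connections in the background so connect() never blocks for long.
            Thread acceptor = new Thread(() -> {
                try {
                    while (true) {
                        server.accept().close();
                    }
                } catch (Exception ignored) {
                    // server closed -> exit
                }
            });
            acceptor.setDaemon(true);
            acceptor.start();

            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                Socket s = new Socket();
                s.connect(addr);
                s.close();
            }
            return (System.nanoTime() - start) / 1_000_000; // elapsed ms
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("1000 connect/close cycles took " + measure(1000) + " ms");
    }
}
```

Running the same loop before and after a change gives the kind of before/after comparison described above.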
>
> I used the YourKit profiler to track this problem, and I compared MINA
> with a simple blocking I/O server to see the difference. The
> difference was mostly related to getting/setting socket parameters and
> thread creation. However, YourKit didn't show me the difference
> directly, so I had to crawl through the call tree to find the
> bottleneck.
>
> For #2, we could refactor IoSessionDataStructureFactory to add such a
> factory method and make it an all-in-one factory for these data
> structures.
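In simplified form, such an all-in-one factory could look like this (the interface and method names below are illustrative stand-ins, not the real IoSessionDataStructureFactory signatures):

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for an "all-in-one" per-session data structure factory.
interface SessionDataStructures {
    Map<String, Object> newAttributeMap();
    Queue<Object> newWriteRequestQueue();
}

class DefaultSessionDataStructures implements SessionDataStructures {
    @Override
    public Map<String, Object> newAttributeMap() {
        return new ConcurrentHashMap<>();
    }

    @Override
    public Queue<Object> newWriteRequestQueue() {
        // A service accessed by only one thread could return a cheaper,
        // non-concurrent queue here instead of ConcurrentLinkedQueue.
        return new ArrayDeque<>();
    }
}
```

A user would then swap the queue implementation simply by plugging in their own factory.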
>