Posted to dev@ignite.apache.org by Pavel Tupitsyn <pt...@apache.org> on 2020/10/16 17:00:38 UTC

[DISCUSS] Use Netty for Java thin client

Igniters,

I'm working on IEP-51 [1] to make Java thin client truly async
and make sure user threads are never blocked
(right now socket writes are performed from user threads).

I've investigated potential approaches and came to the conclusion
that Netty [2] is our best bet.
- Nice Future-based async API => will greatly reduce our code complexity
  and remove manual thread management
- Potentially reduced resource usage - share an EventLoopGroup across all
connections within one IgniteClient
- SSL is easy to use
- Proven performance and reliability
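
To make the Future-based point concrete, here is a minimal pure-JDK sketch
(no Netty; AsyncChannelSketch, sendAsync, and the "client-io" thread name are
hypothetical stand-ins) of the pattern this proposal is after: the user thread
only enqueues the request, and the actual socket write happens on a shared I/O
thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Hypothetical sketch: user threads never block on socket writes. */
public class AsyncChannelSketch {
    // Stand-in for a shared Netty EventLoopGroup: one I/O thread for all channels.
    private final ExecutorService ioLoop =
        Executors.newSingleThreadExecutor(r -> new Thread(r, "client-io"));

    /** Enqueue a request; the returned future completes on the I/O thread. */
    public CompletableFuture<String> sendAsync(String request) {
        return CompletableFuture.supplyAsync(() ->
            // Real code would write the request bytes to the socket here.
            "ack:" + request + " (written by " + Thread.currentThread().getName() + ")",
            ioLoop);
    }

    public void shutdown() {
        ioLoop.shutdown();
    }

    public static void main(String[] args) throws Exception {
        AsyncChannelSketch ch = new AsyncChannelSketch();
        // The user thread returns from sendAsync immediately; get() blocks only by choice.
        System.out.println(ch.sendAsync("put k=1").get());
        ch.shutdown();
    }
}
```

With Netty, the ExecutorService role would be played by the shared
EventLoopGroup, and sendAsync would map onto channel.writeAndFlush, which
returns a ChannelFuture.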

Other approaches, like AsynchronousSocketChannel or selectors, seem to be
too complicated, especially when SSL comes into play.
We should focus on Ignite-specific work instead of spending time on
reinventing async IO.

The obvious downside is an extra dependency in the core module.
However, I heard some discussions about using Netty for GridNioServer in
the future.

Let me know your thoughts.

Pavel

[1]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-51%3A+Java+Thin+Client+Async+API
[2] https://netty.io

Re: [DISCUSS] Use Netty for Java thin client

Posted by Pavel Tupitsyn <pt...@apache.org>.
Alex, Ivan,

Good point about GridNioServer; I'll give it a try.

> java thin client (and other thin clients) should be
> separated from the main ignite repo

This is a big and complex topic; I'd prefer to avoid it here. Please
start a separate thread.



On Mon, Oct 19, 2020 at 12:50 PM Ilya Kasnacheev <il...@gmail.com>
wrote:

> Hello!
>
> I think that if we move to netty, we should also move GridNioServer-using
> code to netty. Let's avoid needing expertise in two socket multiplexing
> frameworks.
>
> Regards,
> --
> Ilya Kasnacheev
>

Re: [DISCUSS] Use Netty for Java thin client

Posted by Ilya Kasnacheev <il...@gmail.com>.
Hello!

I think that if we move to Netty, we should also move the
GridNioServer-using code to Netty. Let's avoid needing expertise in two
socket-multiplexing frameworks.

Regards,
-- 
Ilya Kasnacheev


пн, 19 окт. 2020 г. в 10:52, Ivan Daschinsky <iv...@gmail.com>:

> >> Why can't we use GridNioServer for java thin clients?
> Yes, we can. Despite the naming, it can be used as client (set port as -1),
> But doesn't have the same set of advantages as Netty. Netty has a way
> better support (performance) for native transports and SSL, that
> default java NIO.
>
> But API is much, much worse.
>
> If our goal is to keep thin client in core module in any circumstances,
> that this is the only choice.
>
> But lets see, for example, at Lettuce (netty based async redis client) -
> [1]
> 1. It supports reactive streams (additional module)
> 2. It supports kotlin coroutines (additional module)
> I hardly believe, that we could support this in our core module.
>
> Why not to consider separation? Why user of our thin client should have in
> his classpath megabytes of unnecessary bytecode?
>
>
> [1] -- https://lettuce.io/core/release/reference/index.html
>

Re: [DISCUSS] Use Netty for Java thin client

Posted by Ivan Daschinsky <iv...@gmail.com>.
>> Why can't we use GridNioServer for Java thin clients?
Yes, we can. Despite the naming, it can be used as a client (set the port
to -1), but it doesn't have the same set of advantages as Netty. Netty has
much better support (performance) for native transports and SSL than
default Java NIO.

But GridNioServer's API is much, much worse.

If our goal is to keep the thin client in the core module under any
circumstances, then this is the only choice.

But let's look, for example, at Lettuce (a Netty-based async Redis client) [1]:
1. It supports Reactive Streams (additional module)
2. It supports Kotlin coroutines (additional module)
I hardly believe that we could support this in our core module.

Why not consider separation? Why should a user of our thin client have
megabytes of unnecessary bytecode in his classpath?


[1] -- https://lettuce.io/core/release/reference/index.html

пн, 19 окт. 2020 г. в 10:06, Alex Plehanov <pl...@gmail.com>:

> Pavel,
>
> Why can't we use GridNioServer for java thin clients?
> It has the same advantages as Netty (future based async API, SSL, etc) but
> without extra dependency.
> GridClient (control.sh), for example, uses GridNioServer for communication.
>


-- 
Sincerely yours, Ivan Daschinskiy

Re: [DISCUSS] Use Netty for Java thin client

Posted by Alex Plehanov <pl...@gmail.com>.
Pavel,

Why can't we use GridNioServer for Java thin clients?
It has the same advantages as Netty (Future-based async API, SSL, etc.) but
without the extra dependency.
GridClient (control.sh), for example, uses GridNioServer for communication.

сб, 17 окт. 2020 г. в 11:21, Ivan Daschinsky <iv...@gmail.com>:

> Hi.
> >>  Potentially reduced resource usage - share EventLoopGroup across all
> connections within one IgniteClient.
> Not potentially, definitely. Current approach (one receiver thread per
> TcpClientChannel and shared FJP for continuation) requires too many
> threads.
> When TcpClientChannel is the only one, it's ok. But if we use multiple
> addresses, things become worse.
>
> >> The obvious downside is an extra dependency in the core module.
> There is another downside -- we should rework our transaction's API a
> little bit (Actually,  in netty socket write is performed in other thread
> (channel.write is async) and
> current tx logic will not work
> (org.apache.ignite.internal.client.thin.TcpClientCache#writeCacheInfo))
>
> A little bit of offtopic.
> I suppose, that the java thin client (and other thin clients) should be
> separated from the main ignite repo and have a separate release cycle.
> For example, java thin client depends on default binary
> protocol's implementation, that is notorious for heavy usage of internal
> JDK api and this for example.
> prevents usage of our thin client in graalvm native image.
>

Re: [DISCUSS] Use Netty for Java thin client

Posted by Ivan Daschinsky <iv...@gmail.com>.
Hi.
>> Potentially reduced resource usage - share an EventLoopGroup across all
connections within one IgniteClient.
Not potentially, definitely. The current approach (one receiver thread per
TcpClientChannel and a shared FJP for continuations) requires too many threads.
When there is only one TcpClientChannel, it's OK. But if we use multiple
addresses, things get worse.
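
A tiny pure-JDK sketch (SharedLoopSketch is hypothetical; a fixed thread
pool stands in for Netty's EventLoopGroup) of why sharing the loop caps
thread usage: ten "channels" are served by at most two threads, instead of
ten receiver threads plus the FJP.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical sketch: many channels share one small "event loop group". */
public class SharedLoopSketch {
    public static Set<String> run() throws InterruptedException {
        AtomicInteger idx = new AtomicInteger();

        // Stand-in for a shared EventLoopGroup: two threads, regardless of channel count.
        ExecutorService loopGroup = Executors.newFixedThreadPool(2,
            r -> new Thread(r, "loop-" + idx.getAndIncrement()));

        Set<String> threadsUsed = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(10);

        for (int ch = 0; ch < 10; ch++) {  // ten "TcpClientChannels"
            loopGroup.execute(() -> {      // receive is handled on the shared loop
                threadsUsed.add(Thread.currentThread().getName());
                done.countDown();
            });
        }

        done.await();
        loopGroup.shutdown();
        return threadsUsed; // at most 2 distinct threads for all 10 channels
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run().size() + " thread(s) served 10 channels");
    }
}
```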

>> The obvious downside is an extra dependency in the core module.
There is another downside: we should rework our transaction API a little
bit. In Netty the socket write is performed on another thread
(channel.write is async), so the current tx logic will not work
(org.apache.ignite.internal.client.thin.TcpClientCache#writeCacheInfo).
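
The channel.write point can be illustrated without Netty: any state bound
to the calling thread, such as a ThreadLocal transaction context, is not
visible on the thread that actually performs the asynchronous write. A
hypothetical sketch (TxAffinitySketch and the TX variable are made up for
illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch: thread-local tx context is lost when the write hops to an I/O thread. */
public class TxAffinitySketch {
    // Stand-in for per-thread transaction state consulted when writing cache info.
    static final ThreadLocal<String> TX = new ThreadLocal<>();

    public static String txSeenByWriter() throws Exception {
        TX.set("tx-1"); // the user thread starts a transaction

        ExecutorService ioThread = Executors.newSingleThreadExecutor();
        try {
            // Like channel.write in Netty, the actual write runs on another thread...
            return ioThread.submit(() -> String.valueOf(TX.get())).get();
        }
        finally {
            ioThread.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(txSeenByWriter()); // prints "null": tx context did not follow the write
    }
}
```

So the transaction state has to travel with the request explicitly (e.g. as
part of the message) rather than implicitly via the calling thread.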

A little bit of offtopic:
I suppose that the Java thin client (and other thin clients) should be
separated from the main Ignite repo and have a separate release cycle.
For example, the Java thin client depends on the default binary protocol's
implementation, which is notorious for heavy usage of internal JDK APIs;
this, for example, prevents usage of our thin client in a GraalVM native
image.


пт, 16 окт. 2020 г. в 20:00, Pavel Tupitsyn <pt...@apache.org>:

> Igniters,
>
> I'm working on IEP-51 [1] to make Java thin client truly async
> and make sure user threads are never blocked
> (right now socket writes are performed from user threads).
>
> I've investigated potential approaches and came to the conclusion
> that Netty [2] is our best bet.
> - Nice Future-based async API => will greatly reduce our code complexity
>   and remove manual thread management
> - Potentially reduced resource usage - share EventLoopGroup across all
> connections within one IgniteClient
> - SSL is easy to use
> - Proven performance and reliability
>
> Other approaches, like AsynchronousSocketChannel or selectors, seem to be
> too complicated,
> especially when SSL comes into play.
> We should focus on Ignite-specific work instead of spending time on
> reinventing async IO.
>
> The obvious downside is an extra dependency in the core module.
> However, I heard some discussions about using Netty for GridNioServer in
> future.
>
> Let me know your thoughts.
>
> Pavel
>
> [1]
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-51%3A+Java+Thin+Client+Async+API
> [2] https://netty.io
>


-- 
Sincerely yours, Ivan Daschinskiy