Posted to dev@ignite.apache.org by dm...@gmail.com on 2018/12/17 10:02:48 UTC

What is the best approach to extend Thin Client functionality?

Currently, the ML/TensorFlow module requires the ability to expose some functionality for use in C++ code.

As far as I understand, Ignite can currently be used from C++ only through the Thin Client, and the list of operations it supports is very limited. What is the best approach to working with additional Ignite functionality (like ML/TensorFlow) from C++ code?

I see several ways we can do it:
1. Extend the list of Thin Client operations. Unfortunately, this would lead to overgrowth of the API and, as a result, make Thin Clients for different languages harder to implement and maintain.
2. Use the Thin Client as a "transport layer" and invoke Ignite functionality by putting commands into and getting responses from a cache (like the command pattern; see the sketch after this list). It looks a bit confusing to use a cache with put/get operations as a transport.
3. Add a custom endpoint that listens on a specific port and processes custom commands. This would introduce a new endpoint and a new protocol.
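
To make option 2 concrete, here is a minimal Java sketch of the command pattern over caches. Everything in it is illustrative rather than an existing Ignite API: the cache names "ml-commands"/"ml-results", the request-id scheme, and the echo "processing" are assumptions. The server half runs on a regular (thick) Java node; the client half uses only put/get, so the same sequence could be reproduced in the C++ thin client:

import java.util.UUID;

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;

public class CommandPatternSketch {
    /** Server side: a thick (Java) node turns cache updates into API calls. */
    public static QueryCursor<Cache.Entry<UUID, String>> startListener(Ignite ignite) {
        IgniteCache<UUID, String> commands = ignite.getOrCreateCache("ml-commands");
        IgniteCache<UUID, String> results = ignite.getOrCreateCache("ml-results");

        ContinuousQuery<UUID, String> qry = new ContinuousQuery<>();

        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends UUID, ? extends String> e : events)
                // A real implementation would parse the command and call the
                // Java ML API (e.g. saveModel/getModel); here we just echo.
                // Async put, to avoid blocking the listener thread.
                results.putAsync(e.getKey(), "done: " + e.getValue());
        });

        // The query stays active for as long as the returned cursor is open.
        return commands.query(qry);
    }

    /** Client side: plain put/get only, reproducible in any thin client. */
    public static String call(IgniteClient client, String command) throws InterruptedException {
        ClientCache<UUID, String> commands = client.getOrCreateCache("ml-commands");
        ClientCache<UUID, String> results = client.getOrCreateCache("ml-results");

        UUID reqId = UUID.randomUUID();
        commands.put(reqId, command);

        String res;
        while ((res = results.get(reqId)) == null)
            Thread.sleep(100); // Naive polling; a real protocol needs timeouts.

        return res;
    }
}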

What do you think about these approaches? Could you suggest any other ways?

To make the discussion more concrete, let's say we need two functions available from C++: "saveModel(name, model)" and "getModel(name)", already implemented in Ignite ML and available via the Java API (a possible shape of this facade is sketched below).
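
For the sake of discussion, the two functions could look like the following Java facade. The interface name, the byte[] stand-in for a serialized model, and the "ml-models" cache are assumptions for illustration, not actual Ignite ML signatures. Note how naturally the two calls map onto cache operations, which is part of what makes option 2 tempting:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

/** Hypothetical facade for the two functions above; byte[] stands in
 *  for a serialized model. Not an existing Ignite interface. */
public interface ModelStorage {
    void saveModel(String name, byte[] model);

    byte[] getModel(String name);
}

/** A cache-backed sketch: models are keyed by name in a dedicated cache. */
class CacheModelStorage implements ModelStorage {
    private final IgniteCache<String, byte[]> models;

    CacheModelStorage(Ignite ignite) {
        models = ignite.getOrCreateCache("ml-models");
    }

    @Override public void saveModel(String name, byte[] model) {
        models.put(name, model);
    }

    @Override public byte[] getModel(String name) {
        return models.get(name);
    }
}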

Re: What is the best approach to extend Thin Client functionality?

Posted by Denis Magda <dm...@apache.org>.
Hello Anton,

Is this for TensorFlow only, or also for the ML algorithms Ignite supplies out
of the box? Also, do you need C++ for the training phase?

--
Denis


Re: What is the best approach to extend Thin Client functionality?

Posted by Ilya Kasnacheev <il...@gmail.com>.
Hello!

We absolutely have a C++ thick client and can implement a very complex API on
top of it.

However, there is currently also a push towards thin clients, since they're
easier to get working.

So naturally you have to decide according to your use cases.

Bonus points if you find a way to go with the thin client, so that you can
also have the same functionality in the Python client, etc. A minimal session
is sketched below.
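
To illustrate how little is needed to get the thin client working, here is a minimal Java thin-client session (assuming a node listening on the default thin-client port 10800; the "demo" cache name is arbitrary). The C++ and Python thin clients follow the same connect/put/get pattern:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientHello {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800"); // Default thin-client port.

        // try-with-resources closes the connection automatically.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<String, String> cache = client.getOrCreateCache("demo");

            cache.put("hello", "world");
            System.out.println(cache.get("hello"));
        }
    }
}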

Regards,
-- 
Ilya Kasnacheev



Re: What is the best approach to extend Thin Client functionality?

Posted by Igor Sapego <is...@apache.org>.
Hello,

First of all, to give you an answer, it would help to know what functionality
you need. Could you describe the required API?

Best Regards,
Igor


>