Posted to dev@ignite.apache.org by "Yury Babak (JIRA)" <ji...@apache.org> on 2018/11/15 22:41:00 UTC

[jira] [Created] (IGNITE-10288) [ML] Model inference

Yury Babak created IGNITE-10288:
-----------------------------------

             Summary: [ML] Model inference
                 Key: IGNITE-10288
                 URL: https://issues.apache.org/jira/browse/IGNITE-10288
             Project: Ignite
          Issue Type: New Feature
          Components: ml
            Reporter: Yury Babak


We need a convenient API for model inference. The current idea is to utilize the Service Grid for this purpose. We should have two options: the first is to deliver a model to any node (server or client) and run inference on that node; the second is to pin a model to a specific server and run inference on that server, which is useful when we need specialized hardware, such as a GPU or TPU, that is not available on every server. A sketch of both options follows.
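
To make the two options concrete, here is a minimal Java sketch against the existing Ignite Service Grid API (IgniteServices, ServiceConfiguration). The ModelInferenceService interface, its implementation, the service names, and the "gpu" node attribute are hypothetical names introduced for illustration only; nothing here is a committed API design.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceConfiguration;
import org.apache.ignite.services.ServiceContext;

// Hypothetical inference contract exposed to callers.
interface ModelInferenceService {
    double predict(double[] features);
}

class ModelInferenceServiceImpl implements Service, ModelInferenceService {
    @Override public void init(ServiceContext ctx) {
        // Load/deserialize the trained model on the hosting node.
    }

    @Override public void execute(ServiceContext ctx) {
        // Nothing to do here: inference requests arrive as proxy calls to predict().
    }

    @Override public void cancel(ServiceContext ctx) {
        // Release model resources.
    }

    @Override public double predict(double[] features) {
        // Placeholder: apply the loaded model to the feature vector.
        return 0.0;
    }
}

class ModelInferenceDeployment {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Option 1: an instance on every node in the cluster group --
        // suits lightweight models.
        ignite.services().deployNodeSingleton(
            "modelInference", new ModelInferenceServiceImpl());

        // Option 2: pin a single instance to a server with special hardware,
        // via a node filter ("gpu" is an assumed user-defined node attribute).
        ServiceConfiguration cfg = new ServiceConfiguration();
        cfg.setName("modelInferenceGpu");
        cfg.setService(new ModelInferenceServiceImpl());
        cfg.setTotalCount(1);
        cfg.setMaxPerNodeCount(1);
        cfg.setNodeFilter(node -> Boolean.TRUE.equals(node.attribute("gpu")));
        ignite.services().deploy(cfg);
    }
}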

So the first approach is suitable for lightweight models, while the second is suitable for complex models such as neural networks.
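
With either deployment option, a caller could reach the model through a standard service proxy; again, ModelInferenceService is the hypothetical interface from the sketch above:

// Call the deployed model from any node through a (non-sticky) service proxy.
ModelInferenceService mdl = ignite.services()
    .serviceProxy("modelInference", ModelInferenceService.class, false);

double prediction = mdl.predict(new double[] {5.1, 3.5, 1.4, 0.2});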



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)