Posted to dev@ignite.apache.org by "Anton Dmitriev (JIRA)" <ji...@apache.org> on 2018/11/09 08:10:00 UTC

[jira] [Created] (IGNITE-10201) ML: TensorFlow model inference on Apache Ignite

Anton Dmitriev created IGNITE-10201:
---------------------------------------

             Summary: ML: TensorFlow model inference on Apache Ignite
                 Key: IGNITE-10201
                 URL: https://issues.apache.org/jira/browse/IGNITE-10201
             Project: Ignite
          Issue Type: New Feature
          Components: ml
    Affects Versions: 2.8
            Reporter: Anton Dmitriev
            Assignee: Anton Dmitriev
             Fix For: 2.8


A machine learning pipeline consists of two stages: *model training* and *model inference* _(model training is the process of fitting a model to existing data with known target values; model inference is the process of making predictions on new data using the trained model)._

It's important that a model can be trained in one environment/system and then used for inference in another. A trained model is an immutable object without side effects (a pure mathematical function, in mathematical terms). As a result, the inference process has excellent linear scalability characteristics, because different inferences can be performed in parallel in different threads or on different nodes.
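
To make the scalability argument concrete, here is a minimal Java sketch (the Model interface below is illustrative only, not an Ignite API): an immutable model is just a pure function, so a batch of inputs can be scored concurrently with no coordination between callers.

{code:java}
import java.util.List;
import java.util.stream.Collectors;

public class ParallelInferenceExample {
    /** Illustrative model abstraction: a pure, side-effect-free function. */
    interface Model {
        double predict(double[] features);
    }

    public static void main(String[] args) {
        // A trained model is immutable: the same input always yields the same output.
        Model model = features -> 0.5 * features[0] + 0.5 * features[1];

        List<double[]> batch = List.of(
            new double[] {1.0, 2.0},
            new double[] {3.0, 4.0},
            new double[] {5.0, 6.0});

        // Because the model has no side effects, inferences can run in parallel
        // across threads (and, by the same argument, across cluster nodes).
        List<Double> predictions = batch.parallelStream()
            .map(model::predict)
            .collect(Collectors.toList());

        System.out.println(predictions); // [1.5, 3.5, 5.5]
    }
}
{code}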

The goal of "TensorFlow model inference on Apache Ignite" is to allow users to easily import a pre-trained TensorFlow model into Apache Ignite, distribute it across the nodes of a cluster, provide a common interface for calling these models to make inferences, and finally perform load balancing so that the resources of all nodes are properly utilized.
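
A rough sketch of what the user-facing workflow could look like is given below. Every type and method name in it (DistributedModel, DistributedModelBuilder, predict, and so on) is a hypothetical placeholder illustrating the intent of this ticket, not an existing or committed Ignite API.

{code:java}
import java.nio.file.Path;
import java.util.concurrent.Future;

/**
 * Hypothetical sketch of the proposed user-facing API. None of these types
 * exist in Ignite at the time of this ticket; they only illustrate the
 * intended workflow: import a pre-trained TensorFlow model, distribute it,
 * call it through a common interface, and let the cluster balance the load.
 */
public class TensorFlowInferenceSketch {
    /** A handle to a model replicated across cluster nodes. */
    interface DistributedModel<I, O> extends AutoCloseable {
        /** Asynchronous so requests can be routed to any model instance. */
        Future<O> predict(I input);
    }

    /** Builds a distributed model from a pre-trained TensorFlow SavedModel. */
    interface DistributedModelBuilder {
        <I, O> DistributedModel<I, O> build(Path savedModelDir, int instancesPerNode);
    }

    static void usage(DistributedModelBuilder builder, Path savedModelDir) throws Exception {
        // Import the pre-trained model and spread instances over the nodes.
        try (DistributedModel<double[], double[]> mdl =
                 builder.<double[], double[]>build(savedModelDir, 4)) {
            // Callers need not know which node actually runs the TensorFlow
            // session; load balancing happens behind this single interface.
            Future<double[]> prediction = mdl.predict(new double[] {1.0, 2.0, 3.0});
            System.out.println(prediction.get().length);
        }
    }
}
{code}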



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)