Posted to issues@ignite.apache.org by "Maxim Muzafarov (Jira)" <ji...@apache.org> on 2019/10/03 10:09:00 UTC

[jira] [Updated] (IGNITE-10286) [ML] Umbrella: Model serving

     [ https://issues.apache.org/jira/browse/IGNITE-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maxim Muzafarov updated IGNITE-10286:
-------------------------------------
    Ignite Flags: Docs Required, Release Notes Required  (was: Docs Required)

> [ML] Umbrella: Model serving
> ----------------------------
>
>                 Key: IGNITE-10286
>                 URL: https://issues.apache.org/jira/browse/IGNITE-10286
>             Project: Ignite
>          Issue Type: New Feature
>          Components: ml
>            Reporter: Yury Babak
>            Assignee: Yury Babak
>            Priority: Major
>             Fix For: 2.8
>
>
> We want a convenient API for model serving. That means we need a mechanism for storing models and running inference on them inside Apache Ignite.
> For now, I see two important features: distributed storage for arbitrary models, and inference.
> From my point of view, we could use some built-in (predefined) cache as the model storage and use the Service Grid for model inference. We could implement some "ModelService" that provides access to the storage, returns the list of all suitable models (including model metrics and other information about each model), lets the caller choose one (or several), and runs inference through this service (a rough sketch follows below the quoted description).
> Models imported from TF should also use the same mechanisms for storage and inference.
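A minimal sketch of how the proposed design could look on top of the existing Ignite APIs (a predefined cache as model storage, a Service Grid service as the access point). The names ModelService, ModelDescriptor, InferenceModel and the cache name "ml_model_storage" are illustrative assumptions, not part of the ticket.

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    import javax.cache.Cache;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.resources.IgniteInstanceResource;
    import org.apache.ignite.services.Service;
    import org.apache.ignite.services.ServiceContext;

    /** Minimal model contract assumed for this sketch. */
    interface InferenceModel extends Serializable {
        double predict(double[] features);
    }

    /** Entry kept in the model cache: the model itself plus metadata and metrics. */
    class ModelDescriptor implements Serializable {
        String name;
        InferenceModel model;
        double accuracy;   // example metric used to choose among models
    }

    /** Service Grid service providing cluster-wide access to the model storage. */
    public class ModelService implements Service {
        /** Name of the predefined cache acting as distributed model storage (assumed). */
        private static final String MODEL_CACHE = "ml_model_storage";

        @IgniteInstanceResource
        private transient Ignite ignite;

        private transient IgniteCache<String, ModelDescriptor> models;

        @Override public void init(ServiceContext ctx) {
            models = ignite.getOrCreateCache(MODEL_CACHE);
        }

        @Override public void execute(ServiceContext ctx) {
            // Request/response style service: nothing to run in a background loop.
        }

        @Override public void cancel(ServiceContext ctx) {
            // No resources to release.
        }

        /** Store or replace a model under the given name. */
        public void save(String name, ModelDescriptor desc) {
            models.put(name, desc);
        }

        /** List all stored descriptors so callers can pick a model by its metrics. */
        public List<ModelDescriptor> listModels() {
            List<ModelDescriptor> res = new ArrayList<>();

            for (Cache.Entry<String, ModelDescriptor> e :
                models.query(new ScanQuery<String, ModelDescriptor>()).getAll())
                res.add(e.getValue());

            return res;
        }

        /** Run inference with the chosen model on a single feature vector. */
        public double infer(String name, double[] features) {
            return models.get(name).model.predict(features);
        }
    }

The service could be deployed as a cluster singleton with ignite.services().deployClusterSingleton("modelService", new ModelService()); exposing a caller-facing interface for serviceProxy(), and wiring in TF model import, would be follow-up work under this umbrella.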



--
This message was sent by Atlassian Jira
(v8.3.4#803005)