Posted to issues@ignite.apache.org by "Anton Dmitriev (JIRA)" <ji...@apache.org> on 2019/01/31 11:43:00 UTC

[jira] [Updated] (IGNITE-11137) [ML] IgniteModelStorage

     [ https://issues.apache.org/jira/browse/IGNITE-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anton Dmitriev updated IGNITE-11137:
------------------------------------
    Description: 
Currently we have an integration between machine learning and SQL implemented in IGNITE-11138, IGNITE-11071 and IGNITE-11072. This functionality works with the model storage in a straightforward way: the user can save a model without any existence check, so an existing model might be silently overridden, and the model is deserialized on every predict call, which is very inefficient. The goal of this task is to:
* Add an existence check to the model saving functionality, with meaningful exception messages;
* Add model caching to the predict call so that the model does not have to be deserialized on each call (a minimal sketch of both ideas follows this list).
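
The following is a minimal, hypothetical sketch of the two points above. It does not reflect the actual IgniteModelStorage API: RawStorage, save() and get() are illustrative names only, and the serializer/deserializer functions stand in for whatever serialization machinery the real wrapper uses.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class CachingModelStorage<M> {
        /** Hypothetical byte-level storage keyed by path. */
        public interface RawStorage {
            boolean exists(String path);
            byte[] read(String path);
            void write(String path, byte[] data);
        }

        private final RawStorage storage;
        private final Function<M, byte[]> serializer;
        private final Function<byte[], M> deserializer;
        private final Map<String, M> cache = new ConcurrentHashMap<>();

        public CachingModelStorage(RawStorage storage, Function<M, byte[]> serializer,
            Function<byte[], M> deserializer) {
            this.storage = storage;
            this.serializer = serializer;
            this.deserializer = deserializer;
        }

        /** Saves a model, failing with a meaningful message instead of silently overriding. */
        public void save(String path, M model, boolean overwrite) {
            if (!overwrite && storage.exists(path))
                throw new IllegalStateException("Model already exists [path=" + path +
                    "], pass overwrite=true to replace it");

            storage.write(path, serializer.apply(model));
            cache.put(path, model);
        }

        /** Returns a model for predict calls, deserializing it only on the first access. */
        public M get(String path) {
            return cache.computeIfAbsent(path, p -> {
                if (!storage.exists(p))
                    throw new IllegalArgumentException("Model not found [path=" + p + "]");

                return deserializer.apply(storage.read(p));
            });
        }
    }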

  was:
We want to wrap the model storage with IgniteModelStorage.

This wrapper should:
*  hide all serialization/deserialization activities for models (an illustrative helper follows this list);
*  check arguments and work with paths that do not exist yet;
*  cache models used from the storage.
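
As an illustration of the first point (hiding serialization/deserialization from the caller), a hedged sketch using plain Java serialization is shown below. ModelSerde, toBytes and fromBytes are hypothetical names, and the real wrapper may use a different format entirely.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public final class ModelSerde {
        private ModelSerde() {
            // Utility class, no instances.
        }

        /** Turns a Serializable model into the byte[] kept in the storage. */
        public static byte[] toBytes(Serializable model) {
            if (model == null)
                throw new IllegalArgumentException("Model must not be null");

            try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
                 ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(model);
                oos.flush();

                return bos.toByteArray();
            }
            catch (IOException e) {
                throw new IllegalStateException("Failed to serialize model", e);
            }
        }

        /** Restores a model from the stored bytes; callers never deal with streams. */
        @SuppressWarnings("unchecked")
        public static <M extends Serializable> M fromBytes(byte[] data) {
            if (data == null || data.length == 0)
                throw new IllegalArgumentException("Serialized model data must not be empty");

            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
                return (M)ois.readObject();
            }
            catch (IOException | ClassNotFoundException e) {
                throw new IllegalStateException("Failed to deserialize model", e);
            }
        }
    }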


> [ML] IgniteModelStorage
> -----------------------
>
>                 Key: IGNITE-11137
>                 URL: https://issues.apache.org/jira/browse/IGNITE-11137
>             Project: Ignite
>          Issue Type: Improvement
>          Components: ml
>            Reporter: Yury Babak
>            Assignee: Anton Dmitriev
>            Priority: Major
>             Fix For: 2.8
>
>
> Currently we have an integration between machine learning and SQL implemented in IGNITE-11138, IGNITE-11071 and IGNITE-11072. This functionality works with the model storage in a straightforward way: the user can save a model without any existence check, so an existing model might be silently overridden, and the model is deserialized on every predict call, which is very inefficient. The goal of this task is to:
> * Add an existence check to the model saving functionality, with meaningful exception messages;
> * Add model caching to the predict call so that the model does not have to be deserialized on each call.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)