Posted to user@ignite.apache.org by Vladimir Pligin <vo...@yandex.ru> on 2021/04/21 14:43:41 UTC

Re: Maximising online ml performance

+ zaleslaw.sin@

Hi,

It seems that for some reason we've missed this thread (something with nabble
maybe?).

Maybe we could ask Alexey Zinoviev to have a look at this one if he is
available.

Thanks a lot in advance.

13.03.2021, 16:36, "rothnorton" <ro...@yahoo.com>:

> After reading the examples etc., more generally, I have two questions:
>
> 1) What is the best way to stream JSON data into a machine learning
> algorithm to minimise the latency from receiving the data to getting the result?
>
> This includes serialisation from JSON into the cache as a vector
> representation for the ML algorithms, cache setup, datastore style, etc. The
> documentation is unclear on this.
> I'm happy to benchmark different methods, but maybe someone already knows?
>
> 2) How does column-store ML compare to row-based ML, and if it is better for
> things like linear regression, is there a comparison?
>
> And for both of these, what is the optimal setup (if any) for minimising
> latency for real-time machine learning, and can the Apache Ignite
> infrastructure be modified to achieve this?
>  
> --
> Sent from: <http://apache-ignite-users.70518.x6.nabble.com/>
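
For context on question 1, below is a rough, untested sketch of one way such a pipeline could look with the Ignite 2.x ML APIs: parse each incoming JSON record into a double[] row (Jackson is assumed here as the JSON library), stream the rows into a cache with IgniteDataStreamer, and train a linear regression on that cache with LinearRegressionLSQRTrainer and a DoubleArrayVectorizer. The cache name, JSON field names, key generation and the streamer tuning values are placeholders, not a recommendation.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.ml.dataset.feature.extractor.Vectorizer;
import org.apache.ignite.ml.dataset.feature.extractor.impl.DoubleArrayVectorizer;
import org.apache.ignite.ml.math.primitives.vector.VectorUtils;
import org.apache.ignite.ml.regressions.linear.LinearRegressionLSQRTrainer;
import org.apache.ignite.ml.regressions.linear.LinearRegressionModel;

public class JsonStreamingRegressionSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            // One double[] per observation: features first, label as the last element.
            IgniteCache<Integer, double[]> rows =
                ignite.getOrCreateCache(new CacheConfiguration<Integer, double[]>("mlRows"));

            // The data streamer batches updates per node instead of doing one put() per record.
            try (IgniteDataStreamer<Integer, double[]> streamer = ignite.dataStreamer("mlRows")) {
                streamer.allowOverwrite(true);
                streamer.autoFlushFrequency(100); // ms, placeholder value

                // Stand-in for a real feed (Kafka, socket, etc.).
                String[] incoming = {
                    "{\"x1\": 1.0, \"x2\": 2.0, \"y\": 5.1}",
                    "{\"x1\": 2.0, \"x2\": 1.0, \"y\": 4.0}",
                    "{\"x1\": 3.0, \"x2\": 0.5, \"y\": 3.9}"
                };

                int key = 0;
                for (String json : incoming)
                    streamer.addData(++key, toRow(json));
            } // close() flushes whatever is still buffered

            // DoubleArrayVectorizer maps each double[] to features + label (last element).
            LinearRegressionModel model = new LinearRegressionLSQRTrainer().fit(
                ignite,
                rows,
                new DoubleArrayVectorizer<Integer>().labeled(Vectorizer.LabelCoordinate.LAST));

            System.out.println("Prediction: " + model.predict(VectorUtils.of(2.5, 1.5)));
        }
    }

    // Parses one JSON record into {x1, x2, y}; field names are placeholders.
    private static double[] toRow(String json) throws Exception {
        JsonNode node = MAPPER.readTree(json);
        return new double[] {
            node.get("x1").asDouble(),
            node.get("x2").asDouble(),
            node.get("y").asDouble()
        };
    }
}

On latency specifically: the data streamer buffers entries per node, so autoFlushFrequency (and perNodeBufferSize) is the main knob trading ingestion latency against throughput; for strict per-record latency a plain cache.put() may be worth benchmarking against the streamer.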

--

Warm Regards,

Vladimir Pligin