Posted to issues@ignite.apache.org by "Alexey Zinoviev (Jira)" <ji...@apache.org> on 2020/06/26 08:19:00 UTC
[jira] [Updated] (IGNITE-12685) [ML] [Umbrella] Unify Preprocessors and Pipeline approaches to collect common statistics
[ https://issues.apache.org/jira/browse/IGNITE-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Alexey Zinoviev updated IGNITE-12685:
-------------------------------------
Fix Version/s: (was: 2.9)
2.10
> [ML] [Umbrella] Unify Preprocessors and Pipeline approaches to collect common statistics
> ------------------------------------------------------------------------------------------
>
> Key: IGNITE-12685
> URL: https://issues.apache.org/jira/browse/IGNITE-12685
> Project: Ignite
> Issue Type: Improvement
> Components: ml
> Reporter: Alexey Zinoviev
> Assignee: Alexey Zinoviev
> Priority: Major
> Fix For: 2.10
>
>
> In the current implementation, Cross-Validation behaves differently when run on the experimental Pipeline than when run on a chain of Preprocessors.
>
> Compare tutorial step 8: CV_Param_Grid and 8_CV_Param_Grid_and_pipeline.
> In the first example, all preprocessors are fitted on the whole dataset and do not use the train/test filter (due to the limited API in the preprocessors), so they collect their statistics over the whole initial dataset.
>
> In the second example, the preprocessors are honestly re-fitted on each cross-validation fold, three times with three different sets of statistics. As a result, each fold may produce different encoding values, different Max/Min values per column, and so on.
>
> We should investigate this question and bring the behavior into line with the most popular approaches.
>
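The discrepancy described above can be illustrated with a minimal sketch in plain Java (not the Ignite ML API — the class and method names below are purely illustrative). It shows why a Min/Max statistic collected once on the whole dataset differs from one re-collected on a training fold, which is exactly why the two Cross-Validation code paths can scale the same value differently:

```java
import java.util.Arrays;

public class FoldStatsDemo {
    // Collect the Min/Max statistic for one feature column,
    // as a Min/Max scaler preprocessor would during fitting.
    static double[] minMax(double[] col) {
        double min = Arrays.stream(col).min().getAsDouble();
        double max = Arrays.stream(col).max().getAsDouble();
        return new double[] {min, max};
    }

    public static void main(String[] args) {
        double[] feature = {1.0, 5.0, 9.0, 2.0, 7.0, 3.0};

        // Case 1 (chain of Preprocessors, per the ticket): statistics
        // are collected once, over the whole initial dataset.
        double[] global = minMax(feature);

        // Case 2 (Pipeline): statistics are re-collected on each
        // training fold; here, a fold that excludes the value 9.0.
        double[] trainFold = {1.0, 5.0, 2.0, 7.0, 3.0};
        double[] perFold = minMax(trainFold);

        System.out.println("global  min/max = " + Arrays.toString(global));
        System.out.println("perFold min/max = " + Arrays.toString(perFold));
        // The max differs (9.0 vs. 7.0), so the same raw value is
        // normalized differently depending on where the statistic
        // was collected — the inconsistency this ticket describes.
    }
}
```

Fitting on the whole dataset leaks test-fold information into the statistics; re-fitting per fold avoids the leak but yields fold-dependent encodings, which is the trade-off to reconcile.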
--
This message was sent by Atlassian Jira
(v8.3.4#803005)