Posted to issues@ignite.apache.org by "Alexey Platonov (JIRA)" <ji...@apache.org> on 2018/09/05 13:07:00 UTC

[jira] [Commented] (IGNITE-9413) [ML] Learning rate optimization for GDB.

    [ https://issues.apache.org/jira/browse/IGNITE-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16604377#comment-16604377 ] 

Alexey Platonov commented on IGNITE-9413:
-----------------------------------------

I think this task should be deferred, for two reasons:
1) None of the popular GDB implementations ships a learning rate optimizer. Users usually select the learning rate as a hyperparameter via grid search;
2) A rate optimizer introduces new hyperparameters of its own, which complicates using and tuning GDB. It is often easier to specify a rough value for the convergence precision parameter, select a gradient step size, and then tighten the convergence precision later.
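To illustrate point 1, here is a minimal sketch (plain Python on synthetic toy data, not Ignite's ML API) of the usual workflow: the learning rate is treated as an ordinary hyperparameter and selected by grid search against a validation set, with no optimizer inside the boosting loop itself.

```python
import random

def fit_stump(xs, residuals):
    """Fit a depth-1 regression stump (threshold + two leaf means) to residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - (lm if x <= t else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gdb_fit(xs, ys, rate, n_rounds=30):
    """Plain GDB for MSE loss: each round fits a stump to the current residuals,
    scaled by a fixed learning rate (no rate optimization inside the loop)."""
    base = sum(ys) / len(ys)
    stumps = []
    preds = [base] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + rate * stump(x) for x, p in zip(xs, preds)]
    return lambda x: base + rate * sum(s(x) for s in stumps)

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic 1-D data: a noisy step function.
random.seed(0)
train_x = [i / 50 for i in range(50)]
train_y = [(1.0 if x > 0.5 else 0.0) + random.gauss(0, 0.1) for x in train_x]
val_x = [i / 20 + 0.01 for i in range(20)]
val_y = [1.0 if x > 0.5 else 0.0 for x in val_x]

# Grid search over the learning rate: pick the value with the lowest
# validation MSE, exactly as users do with the popular GDB libraries.
grid = [0.05, 0.1, 0.3, 1.0]
scores = {rate: mse(gdb_fit(train_x, train_y, rate), val_x, val_y)
          for rate in grid}
best_rate = min(scores, key=scores.get)
print(best_rate, scores[best_rate])
```

The point of the sketch is that the boosting loop stays trivial; all the rate-tuning work lives outside it, which is why the popular libraries get away without an internal optimizer.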

Given the above, this task has low priority and needs more investigation before we proceed.

> [ML] Learning rate optimization for GDB.
> ----------------------------------------
>
>                 Key: IGNITE-9413
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9413
>             Project: Ignite
>          Issue Type: Improvement
>          Components: ml
>            Reporter: Yury Babak
>            Assignee: Alexey Platonov
>            Priority: Major
>             Fix For: 2.7
>
>
> We need to support learning rate optimization while training for MSE-loss and Log-loss



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)