Posted to dev@singa.apache.org by "ASF subversion and git services (JIRA)" <ji...@apache.org> on 2015/12/09 12:03:11 UTC

[jira] [Commented] (SINGA-107) Error from loading pre-trained params for training stacked RBMs

    [ https://issues.apache.org/jira/browse/SINGA-107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15048488#comment-15048488 ] 

ASF subversion and git services commented on SINGA-107:
-------------------------------------------------------

Commit f16b1be6f1d30f3ad3554c52359a69c2f643cd61 in incubator-singa's branch refs/heads/master from [~zhaojing]
[ https://git-wip-us.apache.org/repos/asf?p=incubator-singa.git;h=f16b1be ]

SINGA-107 Error from loading pre-trained params for training stacked RBMs

    Description:
When Params are loaded from checkpoint files, their version numbers are reset to 0 for fine-tuning, as explained in the comments of SINGA-42.
However, if these parameters are not fine-tuned (for example, in https://github.com/apache/incubator-singa/tree/master/examples/rbm, the parameters from RBM1 are not updated while training RBM2), their versions are still 0 when they are dumped into the checkpoint files. When these parameters are later loaded to train another model, their versions are 0, so they are re-initialized according to SINGA-42. In other words, the pre-training is lost.
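To make the failure mode concrete, here is a minimal C++ sketch of the logic described above, assuming hypothetical names (Param, InitOrRestore); this is not SINGA's actual API, only an illustration of why a version-0 Param gets re-initialized:

    #include <cstdlib>
    #include <map>
    #include <string>
    #include <vector>

    // Illustrative stand-in for SINGA's Param; the field names are assumptions.
    struct Param {
      std::string name;
      int version = 0;  // reset to 0 when loaded from a checkpoint (SINGA-42)
      std::vector<float> data;
    };

    // Restore a Param from a checkpoint, or re-initialize it when the
    // checkpointed version is 0 -- the case a never-fine-tuned Param hits.
    void InitOrRestore(Param& p, const std::map<std::string, Param>& ckpt) {
      auto it = ckpt.find(p.name);
      if (it != ckpt.end() && it->second.version > 0) {
        p = it->second;  // pre-trained values survive
      } else {
        // version 0 is indistinguishable from "never trained", so the
        // values are randomly re-initialized and the pre-training is lost
        for (auto& v : p.data)
          v = static_cast<float>(std::rand()) / RAND_MAX;
      }
    }

    int main() {
      std::map<std::string, Param> ckpt;
      ckpt["w1"] = Param{"w1", 0, std::vector<float>(4, 0.5f)};  // dumped with version 0
      Param w1{"w1", 0, std::vector<float>(4, 0.0f)};
      InitOrRestore(w1, ckpt);  // version is 0, so w1 is re-initialized
    }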

The current solution is to also load the checkpoint file where each Param was first dumped, so that the later-loaded (correct) Param overrides the incorrect one. Consequently, the version number will not be 0.
For example, in https://github.com/apache/incubator-singa/tree/master/examples/rbm/rbm3.conf, we configure the checkpoint files as:

checkpoint_path: "examples/rbm/rbm2/checkpoint/step6000-worker0"
checkpoint_path: "examples/rbm/rbm1/checkpoint/step6000-worker0"

in order to load w1 and b12 correctly.
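The override behaviour amounts to a "later checkpoint wins" merge, sketched below with the same hypothetical Param struct; Checkpoint, Merge, and the sample version/weight values are assumptions for illustration, not SINGA code:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Param {
      std::string name;
      int version = 0;
      std::vector<float> data;
    };

    using Checkpoint = std::map<std::string, Param>;

    // Merge checkpoints in the configured order; a Param read from a
    // later checkpoint overwrites the copy read earlier.
    Checkpoint Merge(const std::vector<Checkpoint>& checkpoints) {
      Checkpoint params;
      for (const auto& ckpt : checkpoints)
        for (const auto& kv : ckpt)
          params[kv.first] = kv.second;  // later checkpoint wins
      return params;
    }

    int main() {
      Checkpoint rbm2, rbm1;
      rbm2["w1"] = Param{"w1", 0, {0.0f}};      // never fine-tuned in RBM2
      rbm1["w1"] = Param{"w1", 6000, {0.42f}};  // correctly versioned copy
      // rbm1's checkpoint is listed last, so its w1 replaces the version-0 copy
      Checkpoint merged = Merge({rbm2, rbm1});
      std::cout << "w1 version: " << merged["w1"].version << "\n";  // prints 6000
    }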


> Error from loading pre-trained params for training stacked RBMs
> ---------------------------------------------------------------
>
>                 Key: SINGA-107
>                 URL: https://issues.apache.org/jira/browse/SINGA-107
>             Project: Singa
>          Issue Type: Bug
>            Reporter: ZHAOJING
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)