Posted to dev@singa.apache.org by "wangwei (JIRA)" <ji...@apache.org> on 2017/07/05 05:43:01 UTC
[jira] [Created] (SINGA-329) Support layer freezing during training (fine-tuning)
wangwei created SINGA-329:
-----------------------------
Summary: Support layer freezing during training (fine-tuning)
Key: SINGA-329
URL: https://issues.apache.org/jira/browse/SINGA-329
Project: Singa
Issue Type: New Feature
Reporter: wangwei
Assignee: wangwei
During fine-tuning (e.g., fine-tuning a CNN trained over ImageNet on our own dataset), we may want to fix some layers (e.g., bottom layers) and train the other layers (e.g., top layers).
This ticket adds an argument (i.e., a layer name) to the forward and backward functions of FeedForwardNet. Training will freeze the layers before that layer and compute the parameter gradients only for that layer and the layers after it (inclusive).
If you want to freeze the top layers instead, you don't need this argument: simply ignore the gradients of the top layers' parameters returned by the backward function.
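The proposed semantics can be sketched as follows. This is a hypothetical illustration, not the actual SINGA API: the Layer class, the backward_from helper, and the `start` parameter name are all assumptions made for this sketch.

```python
# Hypothetical sketch of the proposed freezing semantics; the Layer
# class, backward_from(), and the `start` argument are illustrative
# assumptions, not SINGA's real FeedForwardNet API.

class Layer:
    def __init__(self, name):
        self.name = name
        self.param_grads = None  # gradients of this layer's parameters

    def backward(self, grad):
        # Stand-in for real gradient computation; records parameter
        # gradients and returns the gradient for the layer below.
        self.param_grads = grad
        return grad


def backward_from(layers, loss_grad, start=None):
    """Compute parameter gradients only for `start` and the layers
    above it (inclusive). `layers` is ordered bottom-to-top;
    start=None trains every layer. Backprop stops once it reaches
    `start`, so the layers below it stay frozen."""
    updated = []
    grad = loss_grad
    for layer in reversed(layers):
        grad = layer.backward(grad)
        updated.append(layer.name)
        if layer.name == start:
            break  # layers below `start` are frozen: no gradients
    return updated


net = [Layer(n) for n in ("conv1", "conv2", "fc")]
# Freeze everything below "conv2": only "fc" and "conv2" get gradients.
print(backward_from(net, 1.0, start="conv2"))  # -> ['fc', 'conv2']
```

The opposite case from the ticket (freezing the top layers) needs no such argument: backward still runs through every layer, and the caller just discards the param_grads of the top layers before the parameter update.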
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)