Posted to commits@singa.apache.org by zh...@apache.org on 2022/02/25 07:09:10 UTC

[singa] branch dev updated: update optimizer for bloodmnist

This is an automated email from the ASF dual-hosted git repository.

zhaojing pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/singa.git


The following commit(s) were added to refs/heads/dev by this push:
     new 23387f4  update optimizer for bloodmnist
     new a234b91  Merge pull request #927 from wannature/singa_v4
23387f4 is described below

commit 23387f420a49c8bf68e4bb2361bbe84f9d0a2251
Author: wenqiao zhang <we...@zju.edu.cn>
AuthorDate: Fri Feb 25 13:21:15 2022 +0800

    update optimizer for bloodmnist
---
 examples/demos/Classification/BloodMnist/ClassDemo.py | 1 -
 1 file changed, 1 deletion(-)

diff --git a/examples/demos/Classification/BloodMnist/ClassDemo.py b/examples/demos/Classification/BloodMnist/ClassDemo.py
index 71a0fa6..1d17150 100644
--- a/examples/demos/Classification/BloodMnist/ClassDemo.py
+++ b/examples/demos/Classification/BloodMnist/ClassDemo.py
@@ -246,7 +246,6 @@ criterion = layer.SoftMaxCrossEntropy()
 # optimizer_ft = opt.SGD(lr=0.005, momentum=0.9, weight_decay=1e-5, dtype=singa_dtype["float32"])
 optimizer_ft = opt.Adam(lr=1e-3)
 # optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=lr)
-# lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer_ft, mode='max', patience=5, threshold=1e-3)
 
 # %% start training
 dev = device.create_cpu_device()
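
For context, the change above replaces a momentum-SGD optimizer with `opt.Adam(lr=1e-3)` and drops a leftover PyTorch scheduler comment. The sketch below is a minimal, illustrative NumPy implementation of a single Adam update step, showing what that optimizer choice computes per parameter; it is not SINGA's implementation, and the quadratic objective is a made-up example.

```python
import numpy as np

def adam_step(p, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates (illustrative)."""
    m = beta1 * m + (1 - beta1) * g        # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * g * g    # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)           # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    p = p - lr * m_hat / (np.sqrt(v_hat) + eps)
    return p, m, v

# Toy objective f(p) = p^2, gradient 2p, starting from p = 1.0
p, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    p, m, v = adam_step(p, 2.0 * p, m, v, t)
print(abs(p) < 0.1)  # parameter has been driven toward the minimum at 0
```

Because Adam normalizes each step by the gradient's running second moment, a single learning rate such as 1e-3 behaves reasonably across parameters with different gradient scales, which is one common reason to prefer it over hand-tuned SGD-with-momentum in an example script like this.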