Posted to commits@singa.apache.org by zh...@apache.org on 2021/10/08 14:24:46 UTC

[singa] branch dev updated: Update data augmentation implementation for cifar_distributed_cnn example

This is an automated email from the ASF dual-hosted git repository.

zhaojing pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/singa.git


The following commit(s) were added to refs/heads/dev by this push:
     new e8b250a  Update data augmentation implementation for cifar_distributed_cnn example
     new 34cb14d  Merge pull request #894 from NLGithubWP/update-distributed-train-cnn-dev
e8b250a is described below

commit e8b250aa727fa9f6f1b1a79d57b30a8e717f982e
Author: nailixing <xi...@gmail.com>
AuthorDate: Fri Oct 8 21:26:33 2021 +0800

    Update data augmentation implementation for cifar_distributed_cnn example
---
 examples/cifar_distributed_cnn/train_cnn.py | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/examples/cifar_distributed_cnn/train_cnn.py b/examples/cifar_distributed_cnn/train_cnn.py
old mode 100644
new mode 100755
index 26e0403..d102623
--- a/examples/cifar_distributed_cnn/train_cnn.py
+++ b/examples/cifar_distributed_cnn/train_cnn.py
@@ -170,9 +170,6 @@ def run(global_rank,
                                                    train_x, train_y, val_x,
                                                    val_y)
 
-
-
-
     if model.dimension == 4:
         tx = tensor.Tensor(
             (batch_size, num_channels, model.input_size, model.input_size), dev,
@@ -192,6 +189,7 @@ def run(global_rank,
     dev.SetVerbosity(verbosity)
     model.train()
 
+    # Augmentation is done only once before training
     b = 0
     x = train_x[idx[b * batch_size:(b + 1) * batch_size]]
     if model.dimension == 4:
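
For context, the comment added in the second hunk refers to the batch that is
prepared once, before the training loop starts. Below is a minimal numpy sketch
of the kind of pad-crop-flip augmentation commonly applied to CIFAR batches;
the helper name and exact transforms are illustrative assumptions, not the
actual code in examples/cifar_distributed_cnn/train_cnn.py.

    import numpy as np

    def augment_batch(x):
        # x: numpy array of shape (batch_size, channels, height, width).
        # Illustrative only: pad 4 pixels on each spatial side, take a random
        # crop of the original size, and randomly flip horizontally.
        padded = np.pad(x, [[0, 0], [0, 0], [4, 4], [4, 4]], mode='symmetric')
        out = np.empty_like(x)
        h, w = x.shape[2], x.shape[3]
        for i in range(x.shape[0]):
            dy, dx = np.random.randint(0, 9, size=2)
            crop = padded[i, :, dy:dy + h, dx:dx + w]
            if np.random.randint(2):
                crop = crop[:, :, ::-1]  # horizontal flip along the width axis
            out[i] = crop
        return out

    # Mirroring the diff above, augmentation is done only once before training,
    # e.g. on the first batch (hypothetical usage):
    # x = augment_batch(train_x[idx[0:batch_size]])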