Posted to commits@singa.apache.org by zh...@apache.org on 2022/08/26 07:12:20 UTC

[singa] branch dev updated: Configure number of GPUs to be used

This is an automated email from the ASF dual-hosted git repository.

zhaojing pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/singa.git


The following commit(s) were added to refs/heads/dev by this push:
     new 64470816 Configure number of GPUs to be used
     new 398b918f Merge pull request #989 from NLGithubWP/dev
64470816 is described below

commit 64470816e8fc13b688fee5caf32419f36436cf27
Author: NLGithubWP <xi...@gmail.com>
AuthorDate: Fri Aug 26 13:46:35 2022 +0800

    Configure number of GPUs to be used
---
 examples/cifar_distributed_cnn/autograd/cifar10_multiprocess.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/cifar_distributed_cnn/autograd/cifar10_multiprocess.py b/examples/cifar_distributed_cnn/autograd/cifar10_multiprocess.py
old mode 100755
new mode 100644
index b5e51ad7..df2dba8b
--- a/examples/cifar_distributed_cnn/autograd/cifar10_multiprocess.py
+++ b/examples/cifar_distributed_cnn/autograd/cifar10_multiprocess.py
@@ -26,7 +26,7 @@ if __name__ == '__main__':
     # Generate a NCCL ID to be used for collective communication
     nccl_id = singa.NcclIdHolder()
 
-    # number of GPUs to be used
+    # Configure number of GPUs to be used
     world_size = int(sys.argv[1])
 
     # Testing the experimental partial-parameter update asynchronous training
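The patched comment sits in a script that reads the GPU count from the command line and spawns one training process per GPU. The pattern around the changed line can be sketched as below; this is a minimal illustration only, with a placeholder `train` function standing in for the SINGA autograd training loop, and `launch` as a hypothetical helper name (the real example passes the NCCL ID and rank into a DistOpt-based trainer).

```python
import multiprocessing
import sys

def train(local_rank, world_size, nccl_id):
    # Placeholder for the per-process training loop; in the real example
    # this runs SINGA autograd training with a collective communicator
    # initialized from nccl_id, local_rank, and world_size.
    print(f"rank {local_rank}/{world_size} would train here")

def launch(world_size, nccl_id=None):
    # Spawn one process per GPU, mirroring the example's structure,
    # then wait for all of them and report their exit codes.
    procs = []
    for local_rank in range(world_size):
        p = multiprocessing.Process(target=train,
                                    args=(local_rank, world_size, nccl_id))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
    return [p.exitcode for p in procs]

if __name__ == '__main__':
    # Configure number of GPUs to be used (the line touched by this commit):
    # taken from the command line, defaulting to 2 for this sketch.
    world_size = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    launch(world_size)
```

Invoked as `python cifar10_multiprocess.py 4`, `world_size` becomes 4 and four worker processes are started, which is why the commit clarifies that the argument *configures* the GPU count rather than merely stating it.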