Posted to commits@singa.apache.org by wa...@apache.org on 2020/09/28 09:34:08 UTC

[singa-doc] 03/07: update docs for v3.1.0.rc1

This is an automated email from the ASF dual-hosted git repository.

wangwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/singa-doc.git

commit 13b689086d82b004b06187f01b937c68fc5bff5b
Author: wang wei <wa...@gmail.com>
AuthorDate: Wed Sep 23 14:25:53 2020 +0800

    update docs for v3.1.0.rc1
---
 docs-site/docs/autograd.md       | 20 ++++++++--------
 docs-site/docs/graph.md          | 50 ++++++++++++++++++++--------------------
 docs-site/docs/installation.md   |  4 ++--
 docs-site/docs/software-stack.md | 22 ++++++++++++++----
 4 files changed, 54 insertions(+), 42 deletions(-)

diff --git a/docs-site/docs/autograd.md b/docs-site/docs/autograd.md
index 4d42070..ece3b53 100644
--- a/docs-site/docs/autograd.md
+++ b/docs-site/docs/autograd.md
@@ -190,21 +190,21 @@ for epoch in range(epochs):
             sgd.update(p, gp)
 ```
 
-### Using the Module API
+### Using the Model API
 
 The following
-[example](https://github.com/apache/singa/blob/master/examples/autograd/cnn_module.py)
-implements a CNN model using the Module provided by the module.
+[example](https://github.com/apache/singa/blob/master/examples/cnn/model/cnn.py)
+implements a CNN model using the [Model API](./graph).
 
-#### Define the subclass of Module
+#### Define the subclass of Model
 
-Define the model class, it should be the subclass of the Module. In this way,
-all operations used during traing phase will form a calculation graph and will
-be analyzed. The operations in the graph will be scheduled and executed
-efficiently. Layers can also be included in the module class.
+Define the model class as a subclass of Model. In this way, all operations
+used during the training phase form a computational graph and can be analyzed.
+The operations in the graph are then scheduled and executed efficiently. Layers
+can also be included in the model class.
 
 ```python
-class MLP(module.Module):  # the model is a subclass of Module
+class MLP(model.Model):  # the model is a subclass of Model
 
     def __init__(self, optimizer):
         super(MLP, self).__init__()
@@ -262,5 +262,5 @@ for i in range(niters):
 ### Python API
 
 Refer
-[here](https://singa.readthedocs.io/en/latest/docs/autograd.html#module-singa.autograd)
+[here](https://singa.readthedocs.io/en/latest/autograd.html#module-singa.autograd)
 for more details of Python API.
diff --git a/docs-site/docs/graph.md b/docs-site/docs/graph.md
index fb4ba69..e1e0334 100644
--- a/docs-site/docs/graph.md
+++ b/docs-site/docs/graph.md
@@ -1,6 +1,6 @@
 ---
 id: graph
-title: Computational Graph
+title: Model
 ---
 
 <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements.  See the NOTICE file distributed with this work for additional information regarding copyright ownership.  The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.  You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed [...]
@@ -13,17 +13,17 @@ edge, all operations form a computational graph. With the computational graph,
 speed and memory optimization can be conducted by scheduling the execution of
 the operations and memory allocation/release intelligently. In SINGA, users only
 need to define the neural network model using the
-[Module](https://github.com/apache/singa/blob/master/python/singa/module.py)
-API. The graph is constructed and optimized at the C++ backend automatically.
+[Model](https://github.com/apache/singa/blob/master/python/singa/model.py) API.
+The graph is constructed and optimized at the C++ backend automatically.
 
 ## Example
 
-The following code illustrates the usage of the `Module` API.
+The following code illustrates the usage of the `Model` API.
 
-1. Implement the new model as a subclass the Module class.
+1. Implement the new model as a subclass of the Model class.
 
 ```Python
-class CNN(module.Module):
+class CNN(model.Model):
 
     def __init__(self, optimizer):
         super(CNN, self).__init__()
@@ -96,7 +96,7 @@ A Google Colab notebook of this example is available
 
 More examples:
 
-- [MLP](https://github.com/apache/singa/blob/master/examples/mlp/module.py)
+- [MLP](https://github.com/apache/singa/blob/master/examples/mlp/model.py)
 - [CNN](https://github.com/apache/singa/blob/master/examples/cnn/model/cnn.py)
 - [ResNet](https://github.com/apache/singa/blob/master/examples/cnn/model/resnet.py)
 
@@ -111,12 +111,12 @@ SINGA constructs the computational graph in three steps:
 3. create the nodes and edges based on the dependencies
 
 Take the matrix multiplication operation from the dense layer of a
-[MLP model](https://github.com/apache/singa/blob/master/examples/mlp/module.py)
+[MLP model](https://github.com/apache/singa/blob/master/examples/mlp/model.py)
 as an example. The operation is called in the `forward` function of the MLP
 class
 
 ```python
-class MLP(module.Module):
+class MLP(model.Model):
 
     def forward(self, inputs):
         x = autograd.matmul(inputs, self.w0)
@@ -148,7 +148,7 @@ The `Exec` function of `Device` buffers the function and its arguments. In
 addition, it also has the information about the blocks (a block is a chunk of
 memory for a tensor) to be read and written by this function.
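+
+As a schematic illustration (plain Python for exposition, not the actual C++
+code; `exec_op`, `buffered_ops` and the block sets are made-up names), the
+buffering and the dependency analysis described below can be pictured as:
+
+```python
+# Device.Exec: buffer each function together with its read/write block info
+buffered_ops = []
+
+def exec_op(fn, read_blocks, write_blocks):
+    buffered_ops.append((fn, read_blocks, write_blocks))
+
+# after the first forward pass, derive dependency edges from that info:
+# if a block is written by O1 and later read by O2, then O2 depends on O1
+last_writer, edges = {}, []
+for fn, reads, writes in buffered_ops:
+    for blk in reads:
+        if blk in last_writer:
+            edges.append((last_writer[blk], fn))
+    for blk in writes:
+        last_writer[blk] = fn
+```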
 
-Once `Module.forward()` has been executed once, all operations are buffered by
+Once `Model.forward()` has been executed once, all operations are buffered by
 `Device`. Next, the read/write information of all operations are analyzed to
 create the computational graph. For example, if a block `b` is written by one
 operation O1 and is later read by another operation O2, we would know O2 depends
@@ -311,7 +311,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
   - Model
     - Using layer: ResNet50 in
       [resnet.py](https://github.com/apache/singa/blob/master/examples/cnn/autograd/resnet_cifar10.py)
-    - Using module: ResNet50 in
+    - Using model: ResNet50 in
       [resnet.py](https://github.com/apache/singa/blob/master/examples/cnn/model/resnet.py)
   - GPU: NVIDIA RTX 2080Ti
 - Notations
@@ -346,7 +346,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0000</td>
       </tr>
       <tr>
-          <td nowrap>module:disable graph</td>
+          <td nowrap>model:disable graph</td>
           <td>4995</td>
           <td>14.1264</td>
           <td>14.1579</td>
@@ -355,7 +355,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0049</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, bfs</td>
+          <td nowrap>model:enable graph, bfs</td>
           <td>3283</td>
           <td>13.7438</td>
           <td>14.5520</td>
@@ -364,7 +364,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0328</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, serial</td>
+          <td nowrap>model:enable graph, serial</td>
           <td>3265</td>
           <td>13.7420</td>
           <td>14.5540</td>
@@ -383,7 +383,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0000</td>
       </tr>
       <tr>
-          <td nowrap>module:disable graph</td>
+          <td nowrap>model:disable graph</td>
           <td>10109</td>
           <td>13.2952</td>
           <td>7.5315</td>
@@ -392,7 +392,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0123</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, bfs</td>
+          <td nowrap>model:enable graph, bfs</td>
           <td>6839</td>
           <td>13.1059</td>
           <td>7.6302</td>
@@ -401,7 +401,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0269</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, serial</td>
+          <td nowrap>model:enable graph, serial</td>
           <td>6845</td>
           <td>13.0489</td>
           <td>7.6635</td>
@@ -414,10 +414,10 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
 ### Multi processes
 
 - Experiment settings
-  - Model
+  - API
     - using Layer: ResNet50 in
       [resnet_dist.py](https://github.com/apache/singa/blob/master/examples/cnn/autograd/resnet_dist.py)
-    - using Module: ResNet50 in
+    - using Model: ResNet50 in
       [resnet.py](https://github.com/apache/singa/blob/master/examples/cnn/model/resnet.py)
   - GPU: NVIDIA RTX 2080Ti \* 2
   - MPI: two MPI processes on one node
@@ -445,7 +445,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0000</td>
       </tr>
       <tr>
-          <td nowrap>module:disable graph</td>
+          <td nowrap>model:disable graph</td>
           <td>5427</td>
           <td>17.8232</td>
           <td>11.2213</td>
@@ -454,7 +454,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>0.9725</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, bfs</td>
+          <td nowrap>model:enable graph, bfs</td>
           <td>3389</td>
           <td>18.2310</td>
           <td>10.9703</td>
@@ -463,7 +463,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>0.9507</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, serial</td>
+          <td nowrap>model:enable graph, serial</td>
           <td>3437</td>
           <td>17.0389</td>
           <td>11.7378</td>
@@ -482,7 +482,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0000</td>
       </tr>
       <tr>
-          <td nowrap>module:disable graph</td>
+          <td nowrap>model:disable graph</td>
           <td>10503</td>
           <td>14.7746</td>
           <td>6.7684</td>
@@ -491,7 +491,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0060</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, bfs</td>
+          <td nowrap>model:enable graph, bfs</td>
           <td>6935</td>
           <td>14.8553</td>
           <td>6.7316</td>
@@ -500,7 +500,7 @@ Tensor GpuConvBackwardx(const Tensor &dy, const Tensor &W, const Tensor &x,
           <td>1.0006</td>
       </tr>
       <tr>
-          <td nowrap>module:enable graph, serial</td>
+          <td nowrap>model:enable graph, serial</td>
           <td>7027</td>
           <td>14.3271</td>
           <td>6.9798</td>
diff --git a/docs-site/docs/installation.md b/docs-site/docs/installation.md
index 48e44d1..64af47e 100644
--- a/docs-site/docs/installation.md
+++ b/docs-site/docs/installation.md
@@ -65,7 +65,7 @@ pip install singa -f http://singa.apache.org/docs/next/wheel-cpu.html --trusted-
 ```
 
 You can install a specific version of SINGA via `singa==<version>`, where the
-`<version>` field should be replaced, e.g., `3.0.0`. The available SINGA
+`<version>` field should be replaced, e.g., `3.1.0`. The available SINGA
 versions are listed at the link.
 
 To install the latest develop version, replace the link with
@@ -78,7 +78,7 @@ pip install singa -f http://singa.apache.org/docs/next/wheel-cuda.html --trusted
 ```
 
 You can also configure SINGA version and the CUDA version, like
-`singa==3.0.0+cuda10.2`. The available combinations of SINGA version and CUDA
+`singa==3.1.0+cuda10.2`. The available combinations of SINGA version and CUDA
 version are listed at the link.
 
 To install the latest develop version, replace the link with
diff --git a/docs-site/docs/software-stack.md b/docs-site/docs/software-stack.md
index c4244f0..620ed61 100644
--- a/docs-site/docs/software-stack.md
+++ b/docs-site/docs/software-stack.md
@@ -12,7 +12,19 @@ learning models, hardware abstractions for scheduling and executing operations,
 and communication components for distributed training. The Python interface
 wraps some CPP data structures and provides additional high-level classes for
 neural network training, which makes it convenient to implement complex neural
-network models. Next, we introduce the software stack in a bottom-up manner.
+network models.
+
+SINGA's programming model combines the advantages of imperative and declarative
+programming. Users define the network structure and the training procedure
+(data flow) imperatively, as in PyTorch. Unlike PyTorch, which recreates the
+operations in every iteration, SINGA buffers the operations to implicitly
+create a computational graph (when this feature is enabled) after the first
+iteration. The graph is similar to those created by libraries that use
+declarative programming, e.g., TensorFlow. Therefore, SINGA can apply memory
+and speed optimization techniques over the computational graph.
+
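+As a rough illustration (a sketch using the `compile` flags described on the
+[Model](./graph) page; `CNN` is assumed to be a Model subclass like the one in
+the CNN example), the buffering can be switched on or off when the model is
+compiled:
+
+```python
+from singa import device, opt, tensor
+
+dev = device.create_cuda_gpu()
+tx = tensor.Tensor((16, 1, 28, 28), dev)  # placeholder input batch
+sgd = opt.SGD(lr=0.005, momentum=0.9)
+m = CNN()                                 # assumed Model subclass
+m.set_optimizer(sgd)
+# use_graph=True buffers operations after the first iteration to build the
+# graph; use_graph=False executes them eagerly, as PyTorch does
+m.compile([tx], is_train=True, use_graph=True, sequential=False)
+```
+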
+Next, we introduce the software stack in a bottom-up manner.
 
 ![SINGA V3 software stack](assets/singav3-sw.png) <br/> **Figure 1 - SINGA V3
 software stack.**
@@ -126,11 +138,11 @@ backward functions automatically in the reverse order. All functions can be
 buffered by the `Scheduler` to create a [computational graph](./graph) for
 efficiency and memory optimization.
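+
+For example, a minimal sketch in the style of the [autograd](./autograd)
+examples (shapes and data are arbitrary):
+
+```python
+import numpy as np
+from singa import autograd, opt, tensor
+
+autograd.training = True
+x = tensor.Tensor(data=np.random.randn(2, 4).astype(np.float32))
+w = tensor.Tensor(shape=(4, 3), requires_grad=True, stores_grad=True)
+w.gaussian(0.0, 0.1)
+t = tensor.Tensor(data=np.array([[1, 0, 0], [0, 1, 0]], dtype=np.float32))
+out = autograd.relu(autograd.matmul(x, w))     # forward functions are recorded
+loss = autograd.softmax_cross_entropy(out, t)
+sgd = opt.SGD(lr=0.05)
+for p, g in autograd.backward(loss):           # backward runs in reverse order
+    sgd.update(p, g)
+```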
 
-### Module
+### Model
 
-`Module` provides an easy interface to implement new network models. You just
-need to inherit `Module` and define the forward propagation of the model by
-creating and calling the layers or operators. `Module` will do autograd and
+[Model](./graph) provides an easy interface to implement new network models.
+You just need to inherit from `Model` and define the model's forward
+propagation by creating and calling the layers or operators. `Model` will do
+autograd and
 update the parameters via `Opt` automatically when training data is fed into it.
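+
+As a minimal sketch (the method and layer names follow the CNN/MLP examples
+linked from the [graph](./graph) page):
+
+```python
+from singa import layer, model
+
+class MyMLP(model.Model):                    # hypothetical example model
+    def __init__(self, num_classes=10):
+        super(MyMLP, self).__init__()
+        self.fc = layer.Linear(num_classes)  # output size; input size inferred
+        self.softmax_cross_entropy = layer.SoftMaxCrossEntropy()
+
+    def forward(self, x):                    # define the forward propagation
+        return self.fc(x)
+
+    def train_one_batch(self, x, y):
+        out = self.forward(x)
+        loss = self.softmax_cross_entropy(out, y)
+        self.optimizer(loss)                 # backward pass + parameter update
+        return out, loss
+```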
 
 ### ONNX