Posted to commits@mxnet.apache.org by zh...@apache.org on 2018/05/03 20:17:28 UTC

[incubator-mxnet] branch master updated: Added more information to API Docs for Python and Gluon. (#10622)

This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 4e68b3b  Added more information to API Docs for Python and Gluon. (#10622)
4e68b3b is described below

commit 4e68b3bc1579869fa9cef4e82b7132b8cb441b4e
Author: Thom Lane <th...@gmail.com>
AuthorDate: Thu May 3 13:17:20 2018 -0700

    Added more information to API Docs for Python and Gluon. (#10622)
---
 docs/api/python/gluon/gluon.md | 77 +++++++++++++++++++++++++++++++++++++++---
 docs/api/python/index.md       | 18 ++++++----
 2 files changed, 85 insertions(+), 10 deletions(-)

diff --git a/docs/api/python/gluon/gluon.md b/docs/api/python/gluon/gluon.md
index f523e64..9bf866d 100644
--- a/docs/api/python/gluon/gluon.md
+++ b/docs/api/python/gluon/gluon.md
@@ -9,10 +9,79 @@
 
 ## Overview
 
-Gluon package is a high-level interface for MXNet designed to be easy to use while
-keeping most of the flexibility of low level API. Gluon supports both imperative
-and symbolic programming, making it easy to train complex models imperatively
-in Python and then deploy with symbolic graph in C++ and Scala.
+The Gluon package is a high-level interface for MXNet designed to be easy to use, while keeping most of the flexibility of a low-level API. Gluon supports both imperative and symbolic programming, making it easy to train complex models imperatively in Python and then deploy with a symbolic graph in C++ and Scala.
+
+Based on the [Gluon API specification](https://github.com/gluon-api/gluon-api), the Gluon API in Apache MXNet provides a clear, concise, and simple API for deep learning. It makes it easy to prototype, build, and train deep learning models without sacrificing training speed.
+
+**Advantages**
+
+1. Simple, Easy-to-Understand Code: Gluon offers a full set of plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers.
+2. Flexible, Imperative Structure: Gluon does not require the neural network model to be rigidly defined, but rather brings the training algorithm and model closer together to provide flexibility in the development process.
+3. Dynamic Graphs: Gluon enables developers to define neural network models that are dynamic, meaning they can be built on the fly, with any structure, and using any of Python’s native control flow.
+4. High Performance: Gluon provides all of the above benefits without impacting the training speed that the underlying engine provides. 
+
+**Examples**
+
+*Simple, Easy-to-Understand Code*
+
+Use plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers:
+
+```python
+net = gluon.nn.Sequential()
+# When instantiated, Sequential stores a chain of neural network layers. 
+# Once presented with data, Sequential executes each layer in turn, using 
+# the output of one layer as the input for the next.
+with net.name_scope():
+    net.add(gluon.nn.Dense(256, activation="relu")) # 1st hidden layer (256 nodes)
+    net.add(gluon.nn.Dense(256, activation="relu")) # 2nd hidden layer (256 nodes)
+    net.add(gluon.nn.Dense(num_outputs))            # output layer
+```
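+
+As a usage sketch (assuming `num_outputs` was defined earlier, e.g. `num_outputs = 10`, and MXNet imported as `mx`), initialize the parameters and run a forward pass:
+
+```python
+net.initialize(mx.init.Xavier())
+out = net(mx.nd.random.uniform(shape=(32, 784)))  # a batch of 32 flattened inputs
+print(out.shape)                                  # (32, num_outputs)
+```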
+
+*Flexible, Imperative Structure*
+
+Prototype, build, and train neural networks in a fully imperative manner using the MXNet `autograd` package and the Gluon `Trainer`:
+
+```python
+epochs = 10
+
+for e in range(epochs):
+    for i, (data, label) in enumerate(train_data):
+        with autograd.record():
+            output = net(data) # the forward iteration
+            loss = softmax_cross_entropy(output, label)
+        loss.backward() # compute gradients, outside the recording scope
+        trainer.step(data.shape[0])
+```
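+
+This loop assumes the network, loss function, and trainer were created beforehand. A minimal sketch of that setup (the optimizer and learning rate are illustrative choices):
+
+```python
+from mxnet import gluon
+
+# Loss function: softmax cross-entropy for classification
+softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
+# Trainer: updates the parameters of `net` after each backward pass
+trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
+```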
+
+*Dynamic Graphs*
+
+Build neural networks on the fly for use cases where the network must change in size and shape during model training:
+
+```python
+def forward(self, F, inputs, tree):
+    children_outputs = [self.forward(F, inputs, child)
+                        for child in tree.children]
+    # Recursively builds the neural network based on each input sentence’s
+    # syntactic structure during the model definition and training process
+    ...
+```
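+
+As a minimal, self-contained sketch of the same idea (a hypothetical `DynamicNet`, not the Tree-LSTM above), the depth of the network can be chosen at run time with native Python control flow:
+
+```python
+from mxnet import gluon, nd
+
+class DynamicNet(gluon.Block):
+    def __init__(self, **kwargs):
+        super(DynamicNet, self).__init__(**kwargs)
+        with self.name_scope():
+            self.dense = gluon.nn.Dense(16, activation="relu")
+
+    def forward(self, x, repeats):
+        # Native Python loop: the graph depth depends on a run-time value
+        for _ in range(repeats):
+            x = self.dense(x)
+        return x
+
+net = DynamicNet()
+net.initialize()
+out = net(nd.random.uniform(shape=(4, 16)), 3)  # apply the layer 3 times
+```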
+
+*High Performance*
+
+Easily cache the computational graph to achieve high performance by defining your neural network with *HybridSequential* and calling the *hybridize* method:
+
+```python
+net = nn.HybridSequential()
+with net.name_scope():
+    net.add(nn.Dense(256, activation="relu"))
+    net.add(nn.Dense(128, activation="relu"))
+    net.add(nn.Dense(2))
+    
+net.hybridize()
+```
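+
+Continuing the block above: the cached graph is built on the first forward pass after calling *hybridize*, and the model can then be exported for deployment (a sketch; the input shape and file prefix are illustrative):
+
+```python
+import mxnet as mx
+
+net.initialize()
+x = mx.nd.random.uniform(shape=(1, 512))  # illustrative input
+net(x)               # first call traces and caches the symbolic graph
+net.export('model')  # writes model-symbol.json and model-0000.params
+```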
+
+
+## Contents
 
 ```eval_rst
 .. toctree::
diff --git a/docs/api/python/index.md b/docs/api/python/index.md
index b097e20..88e8031 100644
--- a/docs/api/python/index.md
+++ b/docs/api/python/index.md
@@ -1,10 +1,14 @@
 # MXNet - Python API
 
-MXNet provides a rich Python API to serve a broad community of Python developers.
-In this section, we provide an in-depth discussion of the functionality provided by
-various MXNet Python packages. We have included code samples for most of the APIs
-for improved clarity. These code samples will run as-is as long as MXNet is first
-imported by running:
+MXNet provides a comprehensive and flexible Python API to serve a broad community of developers with different levels of experience and wide-ranging requirements. In this section, we provide an in-depth discussion of the functionality provided by various MXNet Python packages.
+
+MXNet's Python API has two primary high-level packages*: the Gluon API and Module API. We recommend that new users start with the Gluon API as it's more flexible and easier to debug. Underlying these high-level packages are the core packages of NDArray and Symbol.
+
+NDArray works with arrays in an imperative fashion, i.e. you define how arrays will be transformed to get to an end result. Symbol works with arrays in a declarative fashion, i.e. you define the end result that is required (via a symbolic graph) and the MXNet engine will use various optimizations to determine the steps required to obtain this. With NDArray you have a great deal of flexibility when composing operations (as you can use Python control flow), and you can easily step through your code and inspect intermediate values, which makes debugging straightforward.
+
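+A short example of the two styles:
+
+```python
+import mxnet as mx
+
+# Imperative (NDArray): operations execute immediately
+a = mx.nd.array([1.0, 2.0, 3.0])
+print((a * 2).asnumpy())          # [2. 4. 6.]
+
+# Declarative (Symbol): define a graph first, then bind data and execute
+x = mx.sym.Variable('x')
+y = x * 2
+ex = y.bind(mx.cpu(), {'x': mx.nd.array([1.0, 2.0, 3.0])})
+print(ex.forward()[0].asnumpy())  # [2. 4. 6.]
+```
+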
+The Module API is backed by Symbol, so, although it's very performant, it's also a little more restrictive. With the Gluon API, you can get the best of both worlds: you can develop and test your model imperatively using NDArray, and then switch to Symbol for faster model training and inference (if Symbol equivalents exist for your operations).
+
+Code examples are placed throughout the API documentation, and they can be run after importing MXNet as follows:
 
 ```python
 >>> import mxnet as mx
@@ -12,13 +16,15 @@ imported by running:
 
 ```eval_rst
 
-.. note:: A convenient way to execute examples is the ``%doctest_mode`` mode of
+.. note:: A convenient way to execute code examples is to use the ``%doctest_mode`` mode of
     Jupyter notebook, which allows for pasting multi-line examples containing
     ``>>>`` while preserving indentation. Run ``%doctest_mode?`` in Jupyter notebook
     for more details.
 
 ```
 
+\* Some old references to the Model API may exist, but that API has been deprecated.
+
 ## NDArray API
 
 ```eval_rst

-- 
To stop receiving notification emails like this one, please contact
zhasheng@apache.org.