Posted to commits@mxnet.apache.org by in...@apache.org on 2017/12/14 05:01:51 UTC

[incubator-mxnet] branch v1.0.0 updated: Reorganize the tutorials index page (#9033)

This is an automated email from the ASF dual-hosted git repository.

indhub pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
     new ec0a65a  Reorganize the tutorials index page (#9033)
ec0a65a is described below

commit ec0a65a6f79c1c46b2ef09412731901a56b8707f
Author: Indhu Bharathi <in...@gmail.com>
AuthorDate: Wed Dec 13 21:01:44 2017 -0800

    Reorganize the tutorials index page (#9033)
    
    * Minor changes to tutorials index page.
    
    * Reorganize the tutorials index page.
---
 docs/tutorials/index.md | 64 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index d20a821..50f8578 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -1,16 +1,27 @@
 # Tutorials
 
+## NDArray
+
+NDArray is MXNet’s primary tool for storing and transforming data. NDArrays are similar to NumPy's multi-dimensional arrays, but they confer a few key advantages. First, NDArrays support asynchronous computation on CPU, GPU, and distributed cloud architectures. Second, they provide support for automatic differentiation. These properties make NDArray an ideal library for machine learning, both for researchers and for engineers launching production systems.
+
+- [Manipulate data the MXNet way with ndarray](http://gluon.mxnet.io/chapter01_crashcourse/ndarray.html)
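+
+For a quick taste, here is a minimal sketch (assuming a working `mxnet` install) of creating and manipulating an NDArray:
+
+```python
+import mxnet as mx
+
+# Create a 2x3 array of ones on the CPU (mx.gpu() would target a GPU)
+a = mx.nd.ones((2, 3))
+# Elementwise arithmetic works much like NumPy
+b = a * 2 + 1
+# Copy the result back to NumPy to inspect the values
+print(b.asnumpy())
+```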
+
+
+## Automatic gradients
+
+MXNet makes it easy to calculate derivatives: you write ordinary imperative code, and MXNet calculates the gradients automatically. Every time you make a pass through your model, autograd builds a graph on the fly, through which it can immediately backpropagate gradients.
+
+- [Automatic differentiation with autograd](http://gluon.mxnet.io/chapter01_crashcourse/autograd.html)
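+
+A minimal sketch of recording a computation and backpropagating through it (assuming a working `mxnet` install):
+
+```python
+import mxnet as mx
+from mxnet import autograd
+
+x = mx.nd.array([[1.0, 2.0], [3.0, 4.0]])
+x.attach_grad()          # allocate space for the gradient of x
+with autograd.record():  # build the graph on the fly
+    y = (x * x).sum()
+y.backward()             # backpropagate through the recorded graph
+print(x.grad)            # dy/dx = 2x
+```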
+
+
 ## Gluon
 
-Gluon is the high-level interface for MXNet. It is more intuitive and easier to use than the lower level interface.
-Gluon supports dynamic (define-by-run) graphs with JIT-compilation to achieve both flexibility and efficiency.
+Gluon is MXNet's imperative API. It is more intuitive and easier to use than the symbolic API. Gluon supports dynamic (define-by-run) graphs with JIT-compilation to achieve both flexibility and efficiency.
 
-This is a selected subset of Gluon tutorials that explains basic usage of Gluon and fundamental concepts in deep learning. For the comprehensive tutorial on Gluon that covers topics from basic statistics and probability theory to reinforcement learning and recommender systems, please see [gluon.mxnet.io](http://gluon.mxnet.io). 
+This is a selected subset of Gluon tutorials that explains basic usage of Gluon and fundamental concepts in deep learning. For the comprehensive tutorial on Gluon that covers topics from basic statistics and probability theory to reinforcement learning and recommender systems, please see [gluon.mxnet.io](http://gluon.mxnet.io).
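+
+As a small illustration of the imperative style (the layer and input shapes here are arbitrary), a single dense layer can be defined and run in a few lines:
+
+```python
+import mxnet as mx
+from mxnet import gluon
+
+# Define one dense layer and run a forward pass imperatively
+net = gluon.nn.Dense(1)
+net.initialize()
+x = mx.nd.random.uniform(shape=(4, 2))
+print(net(x))
+```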
 
 ### Basics
 
-- [Manipulate data the MXNet way with ndarray](http://gluon.mxnet.io/chapter01_crashcourse/ndarray.html)
-- [Automatic differentiation with autograd](http://gluon.mxnet.io/chapter01_crashcourse/autograd.html)
 - [Linear regression with gluon](http://gluon.mxnet.io/chapter02_supervised-learning/linear-regression-gluon.html)
 - [Serialization - saving, loading and checkpointing](http://gluon.mxnet.io/chapter03_deep-neural-networks/serialization.html)
 
@@ -24,52 +35,61 @@ This is a selected subset of Gluon tutorials that explains basic usage of Gluon
 
 - [Plumbing: A look under the hood of gluon](http://gluon.mxnet.io/chapter03_deep-neural-networks/plumbing.html)
 - [Designing a custom layer with gluon](http://gluon.mxnet.io/chapter03_deep-neural-networks/custom-layer.html)
-- [Fast, portable neural networks with Gluon HybridBlocks](http://gluon.mxnet.io/chapter07_distributed-learning/hybridize.html)
 - [Training on multiple GPUs with gluon](http://gluon.mxnet.io/chapter07_distributed-learning/multiple-gpus-gluon.html)
 
-## MXNet
 
-These tutorials introduce a few fundamental concepts in deep learning and how to implement them in _MXNet_. The _Basics_ section contains tutorials on manipulating arrays, building networks, loading/preprocessing data, etc. The _Training and Inference_ section talks about implementing Linear Regression, training a Handwritten digit classifier using MLP and CNN, running inferences using a pre-trained model, and lastly, efficiently training a large scale image classifier.
+## Symbolic Interface
+
+MXNet's symbolic interface lets users define a computation graph first and then execute it. This enables MXNet to perform many optimizations that are not possible with imperative execution, such as operator folding and safe reuse of memory used by temporary variables.
 
-### Basics
 
 ```eval_rst
 .. toctree::
    :maxdepth: 1
 
-   basic/ndarray
-   basic/ndarray_indexing
    basic/symbol
    basic/module
    basic/data
+   python/mnist
+   python/predict_image
 ```
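+
+A minimal sketch of the declare-then-execute style (shapes and values here are arbitrary):
+
+```python
+import mxnet as mx
+
+# Declare the computation graph first...
+a = mx.sym.Variable('a')
+b = mx.sym.Variable('b')
+c = 2 * a + b
+# ...then bind concrete data and execute it
+ex = c.eval(ctx=mx.cpu(), a=mx.nd.ones((2, 3)), b=mx.nd.ones((2, 3)))
+print(ex[0].asnumpy())
+```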
 
-### Training and Inference
+
+## Hybrid Networks
+
+Imperative programs are intuitive to write and very flexible, but symbolic programs tend to be more efficient. MXNet combines both paradigms to give users the best of both worlds: write intuitive imperative code during development, and MXNet will automatically generate a symbolic execution graph for faster execution.
+
+- [Fast, portable neural networks with Gluon HybridBlocks](http://gluon.mxnet.io/chapter07_distributed-learning/hybridize.html)
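+
+A minimal sketch of hybridization (the network architecture here is arbitrary):
+
+```python
+import mxnet as mx
+from mxnet import gluon
+
+net = gluon.nn.HybridSequential()
+with net.name_scope():
+    net.add(gluon.nn.Dense(64, activation='relu'))
+    net.add(gluon.nn.Dense(1))
+net.initialize()
+net.hybridize()  # compile the imperative code into a symbolic graph
+x = mx.nd.random.uniform(shape=(8, 16))
+print(net(x))
+```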
+
+## Sparse operations
+
+Many real-world datasets are very sparse (very few nonzeros). MXNet's sparse operations help store such matrices in a memory-efficient way and perform computations on them much faster.
 
 ```eval_rst
 .. toctree::
    :maxdepth: 1
 
-   python/linear-regression
-   python/mnist
-   python/predict_image
-   vision/large_scale_classification
+   sparse/csr
+   sparse/row_sparse
+   sparse/train
 ```
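+
+A minimal sketch of the compressed sparse row (CSR) format (the values here are arbitrary):
+
+```python
+import mxnet as mx
+
+# Convert a mostly-zero dense array to CSR storage
+dense = mx.nd.array([[0, 1, 0], [2, 0, 0]])
+csr = dense.tostype('csr')
+print(csr.data.asnumpy(), csr.indices.asnumpy(), csr.indptr.asnumpy())
+# Operators such as dot can work directly on the sparse format
+w = mx.nd.ones((3, 2))
+print(mx.nd.dot(csr, w).asnumpy())
+```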
 
-### Sparse NDArray
+## Performance
+
+Many real-world datasets are too large to train a model on with a single GPU or a single machine. MXNet solves this problem by scaling almost linearly across multiple GPUs and multiple machines.
 
 ```eval_rst
 .. toctree::
    :maxdepth: 1
 
-   sparse/csr
-   sparse/row_sparse
-   sparse/train
+   vision/large_scale_classification
 ```
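+
+A minimal sketch of sharding one batch across devices with Gluon (the CPU contexts here stand in for real GPUs):
+
+```python
+import mxnet as mx
+from mxnet import gluon
+
+# On a multi-GPU machine these would be mx.gpu(0), mx.gpu(1), ...
+ctx = [mx.cpu(0), mx.cpu(1)]
+data = mx.nd.random.uniform(shape=(8, 4))
+# Split the batch across devices for data-parallel training
+shards = gluon.utils.split_and_load(data, ctx)
+print([s.context for s in shards])
+```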
 
+
 <br>
-More tutorials and examples are available in the GitHub [repository](https://github.com/dmlc/mxnet/tree/master/example).
+More tutorials and examples are available in the GitHub [repository](https://github.com/apache/incubator-mxnet/tree/master/example).
+
 
 ## Contributing Tutorials
 
-Want to contribute an MXNet tutorial? To get started, download the [tutorial template](https://github.com/dmlc/mxnet/tree/master/example/MXNetTutorialTemplate.ipynb).
+Want to contribute an MXNet tutorial? To get started, download the [tutorial template](https://github.com/apache/incubator-mxnet/tree/master/example/MXNetTutorialTemplate.ipynb).
