Posted to commits@mxnet.apache.org by ro...@apache.org on 2019/02/04 20:34:40 UTC

[incubator-mxnet] branch master updated: fix nightly test on tutorials (#14036)

This is an automated email from the ASF dual-hosted git repository.

roshrini pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 9de7e5a  fix nightly test on tutorials (#14036)
9de7e5a is described below

commit 9de7e5a881214018bee3877aa998c971e85fcf6c
Author: Lai Wei <ro...@gmail.com>
AuthorDate: Mon Feb 4 12:34:16 2019 -0800

    fix nightly test on tutorials (#14036)
    
    * fix nightly test
    
    * fix typo
    
    * trigger ci
---
 ci/docker/install/ubuntu_tutorials.sh              |  4 +-
 docs/tutorials/c++/mxnet_cpp_inference_tutorial.md |  8 ++--
 .../gluon/gluon_from_experiment_to_deployment.md   | 39 ++++-----------
 docs/tutorials/gluon/hybrid.md                     | 56 +++++++++++-----------
 docs/tutorials/index.md                            |  4 ++
 5 files changed, 47 insertions(+), 64 deletions(-)

diff --git a/ci/docker/install/ubuntu_tutorials.sh b/ci/docker/install/ubuntu_tutorials.sh
index 404d4bb..60adf46 100755
--- a/ci/docker/install/ubuntu_tutorials.sh
+++ b/ci/docker/install/ubuntu_tutorials.sh
@@ -23,5 +23,5 @@
 set -ex
 apt-get update || true
 apt-get install graphviz python-opencv
-pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm mxboard
-pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm mxboard
+pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm mxboard scipy
+pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm mxboard scipy
diff --git a/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md b/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md
index e55e7c9..ab55a0e 100644
--- a/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md
+++ b/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md
@@ -4,12 +4,12 @@
 MXNet provides various useful tools and interfaces for deploying your model for inference. For example, you can use [MXNet Model Server](https://github.com/awslabs/mxnet-model-server) to start a service and host your trained model easily.
 Besides that, you can also use MXNet's different language APIs to integrate your model with your existing service. We provide [Python](https://mxnet.incubator.apache.org/api/python/module/module.html),    [Java](https://mxnet.incubator.apache.org/api/java/index.html), [Scala](https://mxnet.incubator.apache.org/api/scala/index.html), and [C++](https://mxnet.incubator.apache.org/api/c++/index.html) APIs.
 
-This tutorial is a continuation of the [Gluon end to end tutorial](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/gluon/gluon_from_experiment_to_deployment.md), we will focus on the MXNet C++ API. We have slightly modified the code in [C++ Inference Example](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference) for our use case.
+This tutorial is a continuation of the [Gluon end to end tutorial](https://mxnet.apache.org/versions/master/tutorials/gluon/gluon_from_experiment_to_deployment.html) and focuses on the MXNet C++ API. We have slightly modified the code in [C++ Inference Example](https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example/inference) for our use case.
 
 ## Prerequisites
 
 To complete this tutorial, you need to:
-- Complete the training part of [Gluon end to end tutorial](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/gluon/end_to_end_tutorial_training.md)
+- Complete the training part of [Gluon end to end tutorial](https://mxnet.apache.org/versions/master/tutorials/gluon/gluon_from_experiment_to_deployment.html)
 - Learn the basics about [MXNet C++ API](https://github.com/apache/incubator-mxnet/tree/master/cpp-package)
 
 
@@ -20,7 +20,7 @@ The summary of those two documents is that you need to build MXNet from source w
 
 ## Load the model and run inference
 
-After you complete [the previous tutorial](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/gluon/end_to_end_tutorial_training.md), you will get the following output files:
+After you complete [the previous tutorial](https://mxnet.apache.org/versions/master/tutorials/gluon/gluon_from_experiment_to_deployment.html), you will get the following output files:
 1. Model Architecture stored in `flower-recognition-symbol.json`
 2. Model parameter values stored in `flower-recognition-0040.params` (`0040` refers to the 40 epochs we trained)
 3. Label names stored in `synset.txt`
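For reference, a minimal Python sketch of loading these three files back for a quick sanity check before moving to C++ (the input name `data` and the 224x224 input size are assumptions based on the ResNet-50 model used for training):

```python
import mxnet as mx
from mxnet import gluon

# Load the exported architecture and parameters into a Gluon SymbolBlock.
net = gluon.SymbolBlock.imports('flower-recognition-symbol.json', ['data'],
                                'flower-recognition-0040.params', ctx=mx.cpu())

# Label names, one per line, in the same order used during training.
with open('synset.txt') as f:
    labels = [line.strip() for line in f]

# Dummy forward pass: expect one score per flower class, i.e. shape (1, 102).
scores = net(mx.nd.zeros((1, 3, 224, 224)))
print(scores.shape, len(labels))
```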
@@ -262,6 +262,6 @@ Now you can explore more ways to run inference and deploy your models:
 
 ## References
 
-1. [Gluon end to end tutorial](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/gluon/end_to_end_tutorial_training.md)
+1. [Gluon end to end tutorial](https://mxnet.apache.org/versions/master/tutorials/gluon/gluon_from_experiment_to_deployment.html)
 2. [Gluon C++ inference example](https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/)
 3. [Gluon C++ package](https://github.com/apache/incubator-mxnet/tree/master/cpp-package)
\ No newline at end of file
diff --git a/docs/tutorials/gluon/gluon_from_experiment_to_deployment.md b/docs/tutorials/gluon/gluon_from_experiment_to_deployment.md
index 87e6f24..36c7a2e 100644
--- a/docs/tutorials/gluon/gluon_from_experiment_to_deployment.md
+++ b/docs/tutorials/gluon/gluon_from_experiment_to_deployment.md
@@ -3,7 +3,7 @@
 
 ## Overview
 The MXNet Gluon API comes with a lot of great features and can provide you with everything you need, from experimentation to deploying the model. In this tutorial, we will walk you through a common use case: how to build a model using Gluon, train it on your data, and deploy it for inference.
-This tutorial covers training and inference in Python, please continue to [C++ inference part](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md) after you finish.
+This tutorial covers training and inference in Python; please continue to the [C++ inference part](https://mxnet.incubator.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html) after you finish.
 
 Let's say you need to build a service that provides flower species recognition. A common problem is that you don't have enough data to train a good model. In such cases, a technique called Transfer Learning can be used to make a more robust model.
 In Transfer Learning we make use of a pre-trained model that solves a related task, and was trained on a very large standard dataset, such as ImageNet. ImageNet is from a different domain, but we can utilize the knowledge in this pre-trained model to perform the new task at hand.
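To make the transfer-learning idea concrete, here is a minimal sketch (assuming the ResNet-50 v2 model and the 102 flower classes used later in this tutorial) of reusing the ImageNet weights and replacing only the output layer:

```python
from mxnet import gluon, init
from mxnet.gluon.model_zoo.vision import resnet50_v2

# Start from a model pre-trained on ImageNet and keep its feature extractor.
finetune_net = resnet50_v2(pretrained=True)

# Replace only the output layer with one sized for the new task, and
# initialize just that layer; the rest keeps the pre-trained weights.
finetune_net.output = gluon.nn.Dense(102)
finetune_net.output.initialize(init.Xavier())
```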
@@ -30,7 +30,7 @@ We have prepared a utility file to help you download and organize your data into
 ```python
 import mxnet as mx
 data_util_file = "oxford_102_flower_dataset.py"
-base_url = "https://raw.githubusercontent.com/roywei/incubator-mxnet/gluon_tutorial/docs/tutorial_utils/data/{}?raw=true"
+base_url = "https://raw.githubusercontent.com/apache/incubator-mxnet/master/docs/tutorial_utils/data/{}?raw=true"
 mx.test_utils.download(base_url.format(data_util_file), fname=data_util_file)
 import oxford_102_flower_dataset
 
@@ -39,28 +39,7 @@ path = './data'
 oxford_102_flower_dataset.get_data(path)
 ```
 
-Now your data will be organized into the following format, all the images belong to the same category will be put together in the following pattern:
-```bash
-data
-|--train
-|   |-- class0
-|   |   |-- image_06736.jpg
-|   |   |-- image_06741.jpg
-...
-|   |-- class1
-|   |   |-- image_06755.jpg
-|   |   |-- image_06899.jpg
-...
-|-- test
-|   |-- class0
-|   |   |-- image_00731.jpg
-|   |   |-- image_0002.jpg
-...
-|   |-- class1
-|   |   |-- image_00036.jpg
-|   |   |-- image_05011.jpg
-
-```
+Now your data will be organized into train, test, and validation sets; images belonging to the same class are placed in the same folder.
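A minimal sketch of reading this layout with Gluon (the `./data/train` path is assumed to match the output of the utility script above; each sub-folder becomes one class):

```python
from mxnet import gluon

# ImageFolderDataset labels each image by the sub-folder it lives in.
train_set = gluon.data.vision.ImageFolderDataset('./data/train')
print(len(train_set), 'training images in', len(train_set.synsets), 'classes')
```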
 
 ## Training using Gluon
 
@@ -83,11 +62,11 @@ from mxnet.gluon.model_zoo.vision import resnet50_v2
 ```
 
 Next, we define the hyper-parameters that we will use for fine-tuning. We will use the [MXNet learning rate scheduler](https://mxnet.incubator.apache.org/tutorials/gluon/learning_rate_schedules.html) to adjust learning rates during training.
-
+Here we set `epochs` to 1 for a quick demonstration; please change it to 40 for actual training.
 
 ```python
 classes = 102
-epochs = 40
+epochs = 1
 lr = 0.001
 per_device_batch_size = 32
 momentum = 0.9
@@ -110,7 +89,7 @@ Now we will apply data augmentations on training images. This makes minor altera
 4. Transpose the data from `[height, width, num_channels]` to `[num_channels, height, width]`, and map values from [0, 255] to [0, 1]
 5. Normalize with the mean and standard deviation from the ImageNet dataset.
 
-For validation and inference, we only need to apply step 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md).
+For validation and inference, we only need to apply steps 1, 4, and 5. We also need to save the mean and standard deviation values for [inference using C++](https://mxnet.incubator.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html).
 
 ```python
 jitter_param = 0.4
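# Illustrative sketch, not taken from the tutorial code: step 5 relies on the
# standard ImageNet statistics, and the same values must be reused later for
# C++ inference (treat the exact numbers as an assumption for this dataset).
mean_rgb = [0.485, 0.456, 0.406]
std_rgb = [0.229, 0.224, 0.225]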
@@ -245,7 +224,7 @@ print('[Finished] Test-acc: %.3f' % (test_acc))
 ```
 
 Following is the training result:
-```bash
+```
 [Epoch 40] Train-acc: 0.945, loss: 0.354 | Val-acc: 0.955 | learning-rate: 4.219E-04 | time: 17.8
 [Finished] Test-acc: 0.952
 ```
@@ -309,13 +288,13 @@ print('probability=%f, class=%s' % (prob[idx], labels[idx]))
 ```
 
 Following is the output; you can see the image has been correctly classified as lotus.
-```bash
+```
 probability=9.798435, class=lotus
 ```
 
 ## What's next
 
-You can continue to the [next tutorial](https://github.com/apache/incubator-mxnet/tree/master/docs/tutorials/c++/mxnet_cpp_inference_tutorial.md) on how to load the model we just trained and run inference using MXNet C++ API.
+You can continue to the [next tutorial](https://mxnet.incubator.apache.org/versions/master/tutorials/c++/mxnet_cpp_inference_tutorial.html) on how to load the model we just trained and run inference using the MXNet C++ API.
 
 You can also find more ways to run inference and deploy your models here:
 1. [Java Inference examples](https://github.com/apache/incubator-mxnet/tree/master/scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer)
diff --git a/docs/tutorials/gluon/hybrid.md b/docs/tutorials/gluon/hybrid.md
index f11622b..17e9e1b 100644
--- a/docs/tutorials/gluon/hybrid.md
+++ b/docs/tutorials/gluon/hybrid.md
@@ -154,33 +154,33 @@ However, that's not the case in Symbol API. It's not automatically broadcasted,
 
 | NDArray APIs  | Description  |
 |---|---|
-| [*NDArray.\__add\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__add__) | x.\__add\__(y) <=> x+y <=> mx.nd.add(x, y)  |
-| [*NDArray.\__sub\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__sub__) | x.\__sub\__(y) <=> x-y <=> mx.nd.subtract(x, y)  |
-| [*NDArray.\__mul\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__mul__) | x.\__mul\__(y) <=> x*y <=> mx.nd.multiply(x, y)  |
-| [*NDArray.\__div\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__div__) | x.\__div\__(y) <=> x/y <=> mx.nd.divide(x, y)  |
-| [*NDArray.\__mod\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__mod__) | x.\__mod\__(y) <=> x%y <=> mx.nd.modulo(x, y)  |
-| [*NDArray.\__lt\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__lt__) |  x.\__lt\__(y) <=> x<y <=> x mx.nd.lesser(x, y) |
-| [*NDArray.\__le\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__le__) |  x.\__le\__(y) <=> x<=y <=> mx.nd.less_equal(x, y) |
-| [*NDArray.\__gt\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__gt__) |  x.\__gt\__(y) <=> x>y <=> mx.nd.greater(x, y) |
-| [*NDArray.\__ge\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__ge__) |  x.\__ge\__(y) <=> x>=y <=> mx.nd.greater_equal(x, y)|
-| [*NDArray.\__eq\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__eq__) |  x.\__eq\__(y) <=> x==y <=> mx.nd.equal(x, y) |
-| [*NDArray.\__ne\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__ne__) |  x.\__ne\__(y) <=> x!=y <=> mx.nd.not_equal(x, y) |
+| [NDArray.\__add\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__add__) | x.\__add\__(y) <=> x+y <=> mx.nd.add(x, y)  |
+| [NDArray.\__sub\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__sub__) | x.\__sub\__(y) <=> x-y <=> mx.nd.subtract(x, y)  |
+| [NDArray.\__mul\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__mul__) | x.\__mul\__(y) <=> x*y <=> mx.nd.multiply(x, y)  |
+| [NDArray.\__div\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__div__) | x.\__div\__(y) <=> x/y <=> mx.nd.divide(x, y)  |
+| [NDArray.\__mod\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__mod__) | x.\__mod\__(y) <=> x%y <=> mx.nd.modulo(x, y)  |
+| [NDArray.\__lt\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__lt__) |  x.\__lt\__(y) <=> x<y <=> mx.nd.lesser(x, y) |
+| [NDArray.\__le\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__le__) |  x.\__le\__(y) <=> x<=y <=> mx.nd.less_equal(x, y) |
+| [NDArray.\__gt\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__gt__) |  x.\__gt\__(y) <=> x>y <=> mx.nd.greater(x, y) |
+| [NDArray.\__ge\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__ge__) |  x.\__ge\__(y) <=> x>=y <=> mx.nd.greater_equal(x, y)|
+| [NDArray.\__eq\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__eq__) |  x.\__eq\__(y) <=> x==y <=> mx.nd.equal(x, y) |
+| [NDArray.\__ne\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__ne__) |  x.\__ne\__(y) <=> x!=y <=> mx.nd.not_equal(x, y) |
 
 The current workaround is to use the corresponding broadcast operators for arithmetic and comparison operations to avoid potential hybridization failures when input shapes differ.
 
 | Symbol APIs  | Description  |
 |---|---|
-|[*broadcast_add*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_add) | Returns element-wise sum of the input arrays with broadcasting. |
-|[*broadcast_sub*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_sub) | Returns element-wise difference of the input arrays with broadcasting. |
-|[*broadcast_mul*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_mul) | Returns element-wise product of the input arrays with broadcasting. |
-|[*broadcast_div*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_div) | Returns element-wise division of the input arrays with broadcasting. |
-|[*broadcast_mod*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_mod) | Returns element-wise modulo of the input arrays with broadcasting. |
-|[*broadcast_equal*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_equal) | Returns the result of element-wise *equal to* (==) comparison operation with broadcasting. |
-|[*broadcast_not_equal*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_not_equal) | Returns the result of element-wise *not equal to* (!=) comparison operation with broadcasting. |
-|[*broadcast_greater*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_greater) | Returns the result of element-wise *greater than* (>) comparison operation with broadcasting. |
-|[*broadcast_greater_equal*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_greater_equal) | Returns the result of element-wise *greater than or equal to* (>=) comparison operation with broadcasting. |
-|[*broadcast_lesser*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_lesser) |	Returns the result of element-wise *lesser than* (<) comparison operation with broadcasting. |
-|[*broadcast_lesser_equal*](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_lesser_equal) | Returns the result of element-wise *lesser than or equal to* (<=) comparison operation with broadcasting. |
+|[broadcast_add](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_add) | Returns element-wise sum of the input arrays with broadcasting. |
+|[broadcast_sub](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_sub) | Returns element-wise difference of the input arrays with broadcasting. |
+|[broadcast_mul](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_mul) | Returns element-wise product of the input arrays with broadcasting. |
+|[broadcast_div](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_div) | Returns element-wise division of the input arrays with broadcasting. |
+|[broadcast_mod](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_mod) | Returns element-wise modulo of the input arrays with broadcasting. |
+|[broadcast_equal](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_equal) | Returns the result of element-wise *equal to* (==) comparison operation with broadcasting. |
+|[broadcast_not_equal](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_not_equal) | Returns the result of element-wise *not equal to* (!=) comparison operation with broadcasting. |
+|[broadcast_greater](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_greater) | Returns the result of element-wise *greater than* (>) comparison operation with broadcasting. |
+|[broadcast_greater_equal](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_greater_equal) | Returns the result of element-wise *greater than or equal to* (>=) comparison operation with broadcasting. |
+|[broadcast_lesser](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_lesser) |	Returns the result of element-wise *lesser than* (<) comparison operation with broadcasting. |
+|[broadcast_lesser_equal](https://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.broadcast_lesser_equal) | Returns the result of element-wise *lesser than or equal to* (<=) comparison operation with broadcasting. |
 
 For example, if you want to add an NDArray to your input x, use `broadcast_add` instead of `+`:
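As a rough sketch of this pattern (the block name and shapes below are illustrative, not taken from the tutorial):

```python
from mxnet import gluon, nd

class BroadcastAddExample(gluon.HybridBlock):
    def hybrid_forward(self, F, x, y):
        # F is mx.nd before hybridize() and mx.sym after it, so use the
        # explicit broadcasting operator instead of `x + y`.
        return F.broadcast_add(x, y)

net = BroadcastAddExample()
net.hybridize()
# Shapes (2, 4) and (1, 4) broadcast cleanly even in the symbolic graph.
print(net(nd.ones((2, 4)), nd.ones((1, 4))))
```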
 
@@ -196,7 +196,7 @@ If you used `+`, it would still work before hybridization, but will throw an err
 
 Gluon's imperative interface is very flexible and allows you to print the shape of the NDArray. However, Symbol does not have shape attributes. As a result, you need to avoid printing shapes in `hybrid_forward`.
 Otherwise, you will get the following error:
-```bash
+```
 AttributeError: 'Symbol' object has no attribute 'shape'
 ```
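A small illustrative sketch of how this error is triggered (the block below is an assumption, not code from the tutorial):

```python
from mxnet import gluon, nd

class PrintShape(gluon.HybridBlock):
    def hybrid_forward(self, F, x):
        # Fine imperatively, but after hybridize() x is a Symbol,
        # which has no .shape attribute.
        print(x.shape)
        return x

net = PrintShape()
net(nd.ones((2, 3)))   # prints (2, 3)
net.hybridize()
net(nd.ones((2, 3)))   # AttributeError: 'Symbol' object has no attribute 'shape'
```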
 
@@ -230,11 +230,11 @@ For example, avoid writing `x += y` and use `x  = x + y`, otherwise you will get
 
 | NDArray in-place arithmetic operators | Description |
 |---|---|
-|[*NDArray.\__iadd\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__iadd__) |	x.\__iadd\__(y) <=> x+=y |
-|[*NDArray.\__isub\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__isub__) |	x.\__isub\__(y) <=> x-=y |
-|[*NDArray.\__imul\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__imul__) |	x.\__imul\__(y) <=> x*=y |
-|[*NDArray.\__idiv\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__idiv__) |	x.\__rdiv\__(y) <=> x/=y |
-|[*NDArray.\__imod\__*](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__imod__) |	x.\__rmod\__(y) <=> x%=y |
+|[NDArray.\__iadd\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__iadd__) |	x.\__iadd\__(y) <=> x+=y |
+|[NDArray.\__isub\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__isub__) |	x.\__isub\__(y) <=> x-=y |
+|[NDArray.\__imul\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__imul__) |	x.\__imul\__(y) <=> x*=y |
+|[NDArray.\__idiv\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__idiv__) |	x.\__idiv\__(y) <=> x/=y |
+|[NDArray.\__imod\__](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.__imod__) |	x.\__imod\__(y) <=> x%=y |
 
 
 
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 9457a40..cad9099 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -73,6 +73,7 @@ Select API:&nbsp;
     * [Advanced Learning Rate Schedules](/tutorials/gluon/learning_rate_schedules_advanced.html)
     * [Profiling MXNet Models](/tutorials/python/profiler.html)
     * [Hybridize Gluon models with control flows](/tutorials/control_flow/ControlFlowTutorial.html)
+    * [Gluon end to end from training to inference](/tutorials/gluon/gluon_from_experiment_to_deployment.html)
 * API Guides
     * Core APIs
         * NDArray
@@ -173,6 +174,9 @@ Select API:&nbsp;
 * Backends
     * [Subgraph API](/tutorials/c%2B%2B/subgraphAPI.html)
 
+* Inference
+    * [C++ Inference](/tutorials/c%2B%2B/mxnet_cpp_inference_tutorial.html)
+
 <hr>
 
 ## R Tutorials