Posted to commits@tvm.apache.org by tq...@apache.org on 2020/04/23 19:42:33 UTC

[incubator-tvm] branch master updated: [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
     new 1f6c498  [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)
1f6c498 is described below

commit 1f6c498bcb37ae7106464075f62aecfbb9d681e4
Author: Tianqi Chen <tq...@users.noreply.github.com>
AuthorDate: Thu Apr 23 12:40:11 2020 -0700

    [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings (#5416)
    
    * [DOCS] Migrate some markdowns to rst, fix sphinx3 warnings
    
    * Add note block
---
 docs/api/python/runtime.rst           |  25 -------
 docs/deploy/android.md                |  39 -----------
 docs/deploy/android.rst               |  42 ++++++++++++
 docs/deploy/cpp_deploy.md             |  52 ---------------
 docs/deploy/cpp_deploy.rst            |  56 ++++++++++++++++
 docs/deploy/integrate.md              |  67 -------------------
 docs/deploy/integrate.rst             |  69 ++++++++++++++++++++
 docs/install/nnpack.md                | 100 ----------------------------
 docs/install/nnpack.rst               | 118 ++++++++++++++++++++++++++++++++++
 tests/scripts/task_sphinx_precheck.sh |   2 +-
 10 files changed, 286 insertions(+), 284 deletions(-)

diff --git a/docs/api/python/runtime.rst b/docs/api/python/runtime.rst
index 30d1b98..c51a2d4 100644
--- a/docs/api/python/runtime.rst
+++ b/docs/api/python/runtime.rst
@@ -23,28 +23,3 @@ tvm.runtime
    :imported-members:
    :exclude-members: NDArray
    :autosummary:
-
-
-.. autoclass:: tvm.runtime.PackedFunc
-   :members:
-   :inherited-members:
-
-.. autofunction:: tvm.register_func
-
-.. autofunction:: tvm.get_global_func
-
-
-.. autoclass:: tvm.runtime.Module
-   :members:
-
-.. autofunction:: tvm.runtime.load_module
-
-.. autofunction:: tvm.runtime.system_lib
-
-.. autofunction:: tvm.runtime.enabled
-
-
-.. autoclass:: tvm.runtime.Object
-   :members:
-
-.. autofunction:: tvm.register_object
diff --git a/docs/deploy/android.md b/docs/deploy/android.md
deleted file mode 100644
index 788ab41..0000000
--- a/docs/deploy/android.md
+++ /dev/null
@@ -1,39 +0,0 @@
-<!--- Licensed to the Apache Software Foundation (ASF) under one -->
-<!--- or more contributor license agreements.  See the NOTICE file -->
-<!--- distributed with this work for additional information -->
-<!--- regarding copyright ownership.  The ASF licenses this file -->
-<!--- to you under the Apache License, Version 2.0 (the -->
-<!--- "License"); you may not use this file except in compliance -->
-<!--- with the License.  You may obtain a copy of the License at -->
-
-<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
-
-<!--- Unless required by applicable law or agreed to in writing, -->
-<!--- software distributed under the License is distributed on an -->
-<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
-<!--- KIND, either express or implied.  See the License for the -->
-<!--- specific language governing permissions and limitations -->
-<!--- under the License. -->
-
-# Deploy to Android
-
-
-## Build model for Android Target
-
-Relay compilation of model for android target could follow same approach like android_rpc.
-The code below will save the compilation output which is required on android target.
-
-```
-lib.export_library("deploy_lib.so", ndk.create_shared)
-with open("deploy_graph.json", "w") as fo:
-    fo.write(graph.json())
-with open("deploy_param.params", "wb") as fo:
-    fo.write(relay.save_param_dict(params))
-```
-
-deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
-
-## TVM Runtime for Android Target
-
-Refer [here](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation) to build CPU/OpenCL version flavor TVM runtime for android target.
-From android java TVM API to load model & execute can be referred at this [java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java) sample source.
diff --git a/docs/deploy/android.rst b/docs/deploy/android.rst
new file mode 100644
index 0000000..c724eab
--- /dev/null
+++ b/docs/deploy/android.rst
@@ -0,0 +1,42 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Deploy to Android
+=================
+
+Build model for Android Target
+------------------------------
+
+Relay compilation of a model for an Android target follows the same approach as android_rpc.
+The code below saves the compilation output that is required on the Android target.
+
+
+.. code:: python
+
+    lib.export_library("deploy_lib.so", ndk.create_shared)
+    with open("deploy_graph.json", "w") as fo:
+        fo.write(graph.json())
+    with open("deploy_param.params", "wb") as fo:
+        fo.write(relay.save_param_dict(params))
+
+``deploy_lib.so``, ``deploy_graph.json``, and ``deploy_param.params`` will go to the Android target.
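+
+On the target side, these artifacts can then be loaded back through the graph
+runtime. Below is a minimal sketch, assuming a CPU context and that the files
+sit in the working directory:
+
+.. code:: python
+
+    import tvm
+    from tvm.contrib import graph_runtime
+
+    # Load the compiled library, the serialized graph, and the parameters.
+    lib = tvm.runtime.load_module("deploy_lib.so")
+    graph = open("deploy_graph.json").read()
+    params = bytearray(open("deploy_param.params", "rb").read())
+
+    # Create the graph runtime on CPU and restore the parameters.
+    ctx = tvm.cpu(0)
+    module = graph_runtime.create(graph, lib, ctx)
+    module.load_params(params)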
+
+TVM Runtime for Android Target
+------------------------------
+
+Refer to `this guide <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation>`_ to build a CPU- or OpenCL-flavored TVM runtime for the Android target.
+For loading and executing a model from Android Java through the TVM API, see this `Java sample source <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_.
diff --git a/docs/deploy/cpp_deploy.md b/docs/deploy/cpp_deploy.md
deleted file mode 100644
index 3a99846..0000000
--- a/docs/deploy/cpp_deploy.md
+++ /dev/null
@@ -1,52 +0,0 @@
-<!--- Licensed to the Apache Software Foundation (ASF) under one -->
-<!--- or more contributor license agreements.  See the NOTICE file -->
-<!--- distributed with this work for additional information -->
-<!--- regarding copyright ownership.  The ASF licenses this file -->
-<!--- to you under the Apache License, Version 2.0 (the -->
-<!--- "License"); you may not use this file except in compliance -->
-<!--- with the License.  You may obtain a copy of the License at -->
-
-<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
-
-<!--- Unless required by applicable law or agreed to in writing, -->
-<!--- software distributed under the License is distributed on an -->
-<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
-<!--- KIND, either express or implied.  See the License for the -->
-<!--- specific language governing permissions and limitations -->
-<!--- under the License. -->
-
-Deploy TVM Module using C++ API
-===============================
-
-We provide an example on how to deploy TVM modules in [apps/howto_deploy](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy)
-
-To run the example, you can use the following command
-
-```bash
-cd apps/howto_deploy
-./run_example.sh
-```
-
-Get TVM Runtime Library
------------------------
-
-The only thing we need is to link to a TVM runtime in your target platform.
-TVM provides a minimum runtime, which costs around 300K to 600K depending on how much modules we use.
-In most cases, we can use ```libtvm_runtime.so``` that comes with the build.
-
-If somehow you find it is hard to build ```libtvm_runtime```, checkout [tvm_runtime_pack.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc).
-It is an example all in one file that gives you TVM runtime.
-You can compile this file using your build system and include this into your project.
-
-You can also checkout [apps](https://github.com/apache/incubator-tvm/tree/master/apps/) for example applications build with TVM on iOS, Android and others.
-
-Dynamic Library vs. System Module
----------------------------------
-TVM provides two ways to use the compiled library.
-You can checkout [prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py)
-on how to generate the library and [cpp_deploy.cc](https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc) on how to use them.
-
-- Store library as a shared library and dynamically load the library into your project.
-- Bundle the compiled library into your project in system module mode.
-
-Dynamic loading is more flexible and can load new modules on the fly. System module is a more ```static``` approach.  We can use system module in places where dynamic library loading is banned.
diff --git a/docs/deploy/cpp_deploy.rst b/docs/deploy/cpp_deploy.rst
new file mode 100644
index 0000000..a298f95
--- /dev/null
+++ b/docs/deploy/cpp_deploy.rst
@@ -0,0 +1,56 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+
+Deploy TVM Module using C++ API
+===============================
+
+We provide an example of how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy>`_.
+
+To run the example, use the following command:
+
+
+.. code:: bash
+
+    cd apps/howto_deploy
+    ./run_example.sh
+
+
+Get TVM Runtime Library
+-----------------------
+
+The only thing we need is to link against a TVM runtime on your target platform.
+TVM provides a minimal runtime, which costs around 300K to 600K depending on how many modules are used.
+In most cases, we can use ``libtvm_runtime.so`` that comes with the build.
+
+If you find it hard to build ``libtvm_runtime``, check out
+`tvm_runtime_pack.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc>`_.
+It is an all-in-one example file that provides the TVM runtime.
+You can compile it with your build system and include it in your project.
+
+You can also check out `apps <https://github.com/apache/incubator-tvm/tree/master/apps/>`_ for example applications built with TVM on iOS, Android, and other platforms.
+
+Dynamic Library vs. System Module
+---------------------------------
+TVM provides two ways to use the compiled library.
+See `prepare_test_libs.py <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py>`_
+for how to generate the library and `cpp_deploy.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc>`_ for how to use it.
+
+- Store the library as a shared library and dynamically load it into your project.
+- Bundle the compiled library into your project in system module mode.
+
+Dynamic loading is more flexible and can load new modules on the fly. The system module is a more ``static`` approach that can be used in places where dynamic library loading is banned.
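+
+As a sketch of the two modes from Python (the C++ runtime API mirrors this;
+the library and function names below are illustrative, not part of the example app):
+
+.. code:: python
+
+    import tvm
+
+    # Dynamic loading: the compiled code lives in a shared library on disk.
+    mod_dyn = tvm.runtime.load_module("deploy_lib.so")
+    func_dyn = mod_dyn["addone"]
+
+    # System module: the compiled code was linked into the executable itself,
+    # so we fetch it from the global system library instead of a file.
+    mod_sys = tvm.runtime.system_lib()
+    func_sys = mod_sys["addone"]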
diff --git a/docs/deploy/integrate.md b/docs/deploy/integrate.md
deleted file mode 100644
index 4289614..0000000
--- a/docs/deploy/integrate.md
+++ /dev/null
@@ -1,67 +0,0 @@
-<!--- Licensed to the Apache Software Foundation (ASF) under one -->
-<!--- or more contributor license agreements.  See the NOTICE file -->
-<!--- distributed with this work for additional information -->
-<!--- regarding copyright ownership.  The ASF licenses this file -->
-<!--- to you under the Apache License, Version 2.0 (the -->
-<!--- "License"); you may not use this file except in compliance -->
-<!--- with the License.  You may obtain a copy of the License at -->
-
-<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
-
-<!--- Unless required by applicable law or agreed to in writing, -->
-<!--- software distributed under the License is distributed on an -->
-<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
-<!--- KIND, either express or implied.  See the License for the -->
-<!--- specific language governing permissions and limitations -->
-<!--- under the License. -->
-
-Integrate TVM into Your Project
-===============================
-
-TVM's runtime is designed to be lightweight and portable.
-There are several ways you can integrate TVM into your project.
-
-This article introduces possible ways to integrate TVM
-as a JIT compiler to generate functions on your system.
-
-
-## DLPack Support
-
-TVM's generated function follows the PackedFunc convention.
-It is a function that can take positional arguments including
-standard types such as float, integer, string.
-The PackedFunc takes DLTensor pointer in [dlpack](https://github.com/dmlc/dlpack) convention.
-So the only thing you need to solve is to create a corresponding DLTensor object.
-
-
-
-## Integrate User Defined C++ Array
-
-The only thing we have to do in C++ is to convert your array to DLTensor and pass in its address as
-```DLTensor*``` to the generated function.
-
-
-## Integrate User Defined Python Array
-
-Assume you have a python object ```MyArray```. There are three things that you need to do
-
-- Add ```_tvm_tcode``` field to your array which returns ```tvm.TypeCode.ARRAY_HANDLE```
-- Support ```_tvm_handle``` property in your object, which returns the address of DLTensor in python integer
-- Register this class by ```tvm.register_extension```
-
-```python
-# Example code
-import tvm
-
-class MyArray(object):
-    _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE
-
-    @property
-    def _tvm_handle(self):
-        dltensor_addr = self.get_dltensor_addr()
-        return dltensor_addr
-
-# You can put registration step in a separate file mypkg.tvm.py
-# and only optionally import that if you only want optional dependency.
-tvm.register_extension(MyArray)
-```
diff --git a/docs/deploy/integrate.rst b/docs/deploy/integrate.rst
new file mode 100644
index 0000000..99c968f
--- /dev/null
+++ b/docs/deploy/integrate.rst
@@ -0,0 +1,69 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Integrate TVM into Your Project
+===============================
+
+TVM's runtime is designed to be lightweight and portable.
+There are several ways you can integrate TVM into your project.
+
+This article introduces possible ways to integrate TVM
+as a JIT compiler to generate functions on your system.
+
+
+DLPack Support
+--------------
+
+TVM's generated functions follow the PackedFunc convention.
+A PackedFunc can take positional arguments of standard types such as float, integer, and string,
+and takes DLTensor pointers in the `DLPack <https://github.com/dmlc/dlpack>`_ convention.
+So the only thing you need to do is create a corresponding DLTensor object.
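+
+From Python, ``tvm.nd.array`` already produces such an object, so a generated
+function can be called on it directly. A minimal sketch, assuming a previously
+compiled module with an illustrative function ``addone``:
+
+.. code:: python
+
+   import numpy as np
+   import tvm
+
+   # tvm.nd.array wraps the data as an NDArray backed by a DLTensor,
+   # which a PackedFunc accepts directly.
+   mod = tvm.runtime.load_module("deploy_lib.so")
+   addone = mod["addone"]
+
+   x = tvm.nd.array(np.ones(10, dtype="float32"))
+   y = tvm.nd.array(np.zeros(10, dtype="float32"))
+   addone(x, y)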
+
+
+
+Integrate User Defined C++ Array
+--------------------------------
+
+The only thing we have to do in C++ is convert your array to a DLTensor and pass its address as a
+``DLTensor*`` to the generated function.
+
+
+Integrate User Defined Python Array
+-----------------------------------
+
+Assume you have a Python object ``MyArray``. There are three things that you need to do:
+
+- Add a ``_tvm_tcode`` field to your array which returns ``tvm.TypeCode.ARRAY_HANDLE``
+- Support the ``_tvm_handle`` property in your object, which returns the address of the DLTensor as a Python integer
+- Register the class with ``tvm.register_extension``
+
+.. code:: python
+
+   # Example code
+   import tvm
+
+   class MyArray(object):
+       _tvm_tcode = tvm.TypeCode.ARRAY_HANDLE
+
+       @property
+       def _tvm_handle(self):
+           dltensor_addr = self.get_dltensor_addr()
+           return dltensor_addr
+
+
+   # You can put the registration step in a separate file, e.g. mypkg.tvm.py,
+   # and only import it if you want the optional dependency.
+   tvm.register_extension(MyArray)
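+
+Once registered, instances of ``MyArray`` can be passed directly to any
+generated PackedFunc wherever an array argument is expected; TVM reads the
+``_tvm_handle`` address and treats it as a ``DLTensor*``.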
diff --git a/docs/install/nnpack.md b/docs/install/nnpack.md
deleted file mode 100644
index e1bcb70..0000000
--- a/docs/install/nnpack.md
+++ /dev/null
@@ -1,100 +0,0 @@
-<!--- Licensed to the Apache Software Foundation (ASF) under one -->
-<!--- or more contributor license agreements.  See the NOTICE file -->
-<!--- distributed with this work for additional information -->
-<!--- regarding copyright ownership.  The ASF licenses this file -->
-<!--- to you under the Apache License, Version 2.0 (the -->
-<!--- "License"); you may not use this file except in compliance -->
-<!--- with the License.  You may obtain a copy of the License at -->
-
-<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
-
-<!--- Unless required by applicable law or agreed to in writing, -->
-<!--- software distributed under the License is distributed on an -->
-<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
-<!--- KIND, either express or implied.  See the License for the -->
-<!--- specific language governing permissions and limitations -->
-<!--- under the License. -->
-
-# NNPACK Contrib Installation
-
-[NNPACK](https://github.com/Maratyszcza/NNPACK) is an acceleration package
-for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
-Using NNPACK, higher-level libraries like _MXNet_ can speed up
-the execution on multi-core CPU computers, including laptops and mobile devices.
-
-***Note***: AS TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purpose.
-For regular use prefer native tuned TVM implementation.
-
-_TVM_ supports NNPACK for forward propagation (inference only) in convolution, max-pooling, and fully-connected layers.
-In this document, we give a high level overview of how to use NNPACK with _TVM_.
-
-## Conditions
-The underlying implementation of NNPACK utilizes several acceleration methods,
-including [fft](https://arxiv.org/abs/1312.5851) and [winograd](https://arxiv.org/abs/1509.09308).
-These algorithms work better on some special `batch size`, `kernel size`, and `stride` settings than on other,
-so depending on the context, not all convolution, max-pooling, or fully-connected layers can be powered by NNPACK.
-When favorable conditions for running NNPACKS are not met,
-
-NNPACK only supports Linux and OS X systems. Windows is not supported at present.
-
-## Build/Install NNPACK
-
-If the trained model meets some conditions of using NNPACK,
-you can build TVM with NNPACK support.
-Follow these simple steps:
-* Build NNPACK shared library with the following commands. _TVM_ will link NNPACK dynamically.
-
-Note: The following NNPACK installation instructions have been tested on Ubuntu 16.04.
-
-### Build [Ninja](https://ninja-build.org/)
-
-NNPACK need a recent version of Ninja. So we need to install ninja from source.
-```bash
-git clone git://github.com/ninja-build/ninja.git
-cd ninja
-./configure.py --bootstrap
-```
-
-Set the environment variable PATH to tell bash where to find the ninja executable. For example, assume we cloned ninja on the home directory ~. then we can added the following line in ~/.bashrc.
-```bash
-export PATH="${PATH}:~/ninja"
-```
-
-### Build [NNPACK](https://github.com/Maratyszcza/NNPACK)
-
-The new CMAKE version of NNPACK download [Peach](https://github.com/Maratyszcza/PeachPy) and other dependencies alone
-
-Note: at least on OS X, running `ninja install` below will overwrite googletest libraries installed in `/usr/local/lib`. If you build googletest again to replace the nnpack copy, be sure to pass `-DBUILD_SHARED_LIBS=ON` to `cmake`.
-
-```bash
-git clone --recursive https://github.com/Maratyszcza/NNPACK.git
-cd NNPACK
-# Add PIC option in CFLAG and CXXFLAG to build NNPACK shared library
-sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
-sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
-mkdir build
-cd build
-# Generate ninja build rule and add shared library in configuration
-cmake -G Ninja -D BUILD_SHARED_LIBS=ON ..
-ninja
-sudo ninja install
-
-# Add NNPACK lib folder in your ldconfig
-echo "/usr/local/lib" > /etc/ld.so.conf.d/nnpack.conf
-sudo ldconfig
-```
-
-## Build TVM with NNPACK support
-
-```bash
-git clone --recursive https://github.com/apache/incubator-tvm tvm
-```
-
-* Set `set(USE_NNPACK ON)` in config.cmake.
-* Set `NNPACK_PATH` to the $(YOUR_NNPACK_INSTALL_PATH)
-
-after configuration use `make` to build TVM
-
-```bash
-make
-```
diff --git a/docs/install/nnpack.rst b/docs/install/nnpack.rst
new file mode 100644
index 0000000..10497ba
--- /dev/null
+++ b/docs/install/nnpack.rst
@@ -0,0 +1,118 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+
+NNPACK Contrib Installation
+===========================
+
+`NNPACK <https://github.com/Maratyszcza/NNPACK>`_ is an acceleration package
+for neural network computations, which can run on x86-64, ARMv7, or ARM64 architecture CPUs.
+Using NNPACK, higher-level libraries like MXNet can speed up
+the execution on multi-core CPU computers, including laptops and mobile devices.
+
+.. note::
+
+   As TVM already has natively tuned schedules, NNPACK is here mainly for reference and comparison purposes.
+   For regular use, prefer the natively tuned TVM implementation.
+
+TVM supports NNPACK for forward propagation (inference only) in convolution, max-pooling, and fully-connected layers.
+In this document, we give a high-level overview of how to use NNPACK with TVM.
+
+Conditions
+----------
+
+The underlying implementation of NNPACK utilizes several acceleration methods,
+including `FFT <https://arxiv.org/abs/1312.5851>`_ and `Winograd <https://arxiv.org/abs/1509.09308>`_.
+These algorithms work better on some special batch size, kernel size, and stride settings than on others,
+so depending on the context, not all convolution, max-pooling, or fully-connected layers can be powered by NNPACK.
+When favorable conditions for running NNPACK are not met, TVM falls back to its natively tuned schedules.
+
+NNPACK only supports Linux and OS X systems. Windows is not supported at present.
+
+Build/Install NNPACK
+--------------------
+
+If the trained model meets the conditions for using NNPACK,
+you can build TVM with NNPACK support.
+Follow these simple steps: build the NNPACK shared library with the following commands; TVM will link NNPACK dynamically.
+
+Note: the following NNPACK installation instructions have been tested on Ubuntu 16.04.
+
+Build Ninja
+~~~~~~~~~~~
+
+NNPACK needs a recent version of Ninja, so we need to install Ninja from source.
+
+.. code:: bash
+
+   git clone git://github.com/ninja-build/ninja.git
+   cd ninja
+   ./configure.py --bootstrap
+
+
+Set the environment variable PATH to tell bash where to find the ninja executable. For example, assuming we cloned Ninja into the home directory ``~``, we can add the following line to ``~/.bashrc``:
+
+
+.. code:: bash
+
+   export PATH="${PATH}:~/ninja"
+
+
+Build NNPACK
+~~~~~~~~~~~~
+
+The new CMake build of NNPACK downloads `PeachPy <https://github.com/Maratyszcza/PeachPy>`_ and other dependencies on its own.
+
+Note: at least on OS X, running ``ninja install`` below will overwrite googletest libraries installed in ``/usr/local/lib``. If you build googletest again to replace the NNPACK copy, be sure to pass ``-DBUILD_SHARED_LIBS=ON`` to ``cmake``.
+
+.. code:: bash
+
+   git clone --recursive https://github.com/Maratyszcza/NNPACK.git
+   cd NNPACK
+   # Add PIC option in CFLAG and CXXFLAG to build NNPACK shared library
+   sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
+   sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
+   mkdir build
+   cd build
+   # Generate ninja build rule and add shared library in configuration
+   cmake -G Ninja -D BUILD_SHARED_LIBS=ON ..
+   ninja
+   sudo ninja install
+
+   # Add NNPACK lib folder in your ldconfig
+   echo "/usr/local/lib" > /etc/ld.so.conf.d/nnpack.conf
+   sudo ldconfig
+
+
+Build TVM with NNPACK support
+-----------------------------
+
+.. code:: bash
+
+   git clone --recursive https://github.com/apache/incubator-tvm tvm
+
+- Set ``set(USE_NNPACK ON)`` in ``config.cmake``.
+- Set ``NNPACK_PATH`` to the $(YOUR_NNPACK_INSTALL_PATH).
+
+After configuration, use ``make`` to build TVM:
+
+
+.. code:: bash
+
+   make
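+
+To sanity-check the result from Python, ``tvm.contrib.nnpack`` ships an
+``is_available`` helper; treat its presence as an assumption about your TVM version:
+
+.. code:: python
+
+   from tvm.contrib import nnpack
+
+   # True when TVM was built with NNPACK support and the NNPACK
+   # runtime initializes successfully on this machine.
+   print(nnpack.is_available())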
diff --git a/tests/scripts/task_sphinx_precheck.sh b/tests/scripts/task_sphinx_precheck.sh
index 6709b28..0c82b2c 100755
--- a/tests/scripts/task_sphinx_precheck.sh
+++ b/tests/scripts/task_sphinx_precheck.sh
@@ -42,7 +42,7 @@ cd docs
 make clean
 TVM_TUTORIAL_EXEC_PATTERN=none make html 2>/tmp/$$.log.txt
 
-grep -v -E "__mro__|RemovedInSphinx|UserWarning|FutureWarning|Keras" < /tmp/$$.log.txt > /tmp/$$.logclean.txt || true
+grep -v -E "__mro__|RemovedIn|UserWarning|FutureWarning|Keras" < /tmp/$$.log.txt > /tmp/$$.logclean.txt || true
 echo "---------Sphinx Log----------"
 cat /tmp/$$.logclean.txt
 echo "-----------------------------"