Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/10/04 10:24:00 UTC

[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721203019



##########
File path: cpp-package/example/inference/README.md
##########
@@ -27,7 +27,7 @@ This directory contains following examples. In order to run the examples, ensure
 
 ## [imagenet_inference.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp>)
 
-This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® MKL-DNN (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.
+This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® DNNL (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.

Review comment:
       Use oneDNN, the official library name?

##########
File path: docs/python_docs/python/tutorials/index.rst
##########
@@ -84,10 +84,10 @@ Performance
       How to use int8 in your model to boost training speed.
 
    .. card::
-      :title: MKL-DNN
+      :title: DNNL

Review comment:
       oneDNN

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md
##########
@@ -208,9 +208,9 @@ o = exe.outputs[0]
 t = o.asnumpy()
 ```
 
-More detailed debugging and profiling information can be logged by setting the environment variable 'MKLDNN_VERBOSE':
+More detailed debugging and profiling information can be logged by setting the environment variable 'DNNL_VERBOSE':
 ```
-export MKLDNN_VERBOSE=1
+export DNNL_VERBOSE=1
 ```
 For example, by running above code snippet, the following debugging logs providing more insights on ONEDNN primitives `convolution` and `reorder`. That includes: Memory layout, infer shape and the time cost of primitive execution.

Review comment:
       ```suggestion
       For example, by running the above code snippet, the following debugging logs provide more insight into the oneDNN primitives `convolution` and `reorder`, including memory layout, inferred shapes, and the execution time of each primitive.
       ```
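
       For reference, a minimal shell sketch of enabling the verbose mode discussed above (the script name is a placeholder; in recent oneDNN releases `DNNL_VERBOSE=2` additionally logs primitive creation):

       ```bash
       # Enable oneDNN (DNNL) verbose mode for this shell session.
       export DNNL_VERBOSE=1
       # Run any MXNet workload; primitive execution details are printed to
       # stdout as lines starting with "dnnl_verbose," (older builds) or
       # "onednn_verbose," (newer oneDNN releases).
       python my_script.py
       ```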

##########
File path: docs/python_docs/python/tutorials/index.rst
##########
@@ -84,10 +84,10 @@ Performance
       How to use int8 in your model to boost training speed.
 
    .. card::
-      :title: MKL-DNN
+      :title: DNNL
       :link: performance/backend/mkldnn/index.html
 
-      How to get the most from your CPU by using Intel's MKL-DNN.
+      How to get the most from your CPU by using Intel's DNNL.

Review comment:
       oneDNN

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.
 
    .. card::
-      :title: MKL-DNN Quantization
-      :link: mkldnn_quantization
+      :title: DNNL Quantization
+      :link: dnnl_quantization
 
-      How to perform quantization with MKLDNN
+      How to perform quantization with DNNL

Review comment:
       ```suggestion
             How to perform quantization with oneDNN
       ```

##########
File path: docs/python_docs/python/tutorials/performance/index.rst
##########
@@ -76,10 +76,10 @@ Accelerated Backend
    ..
       TBD Content
       .. card::
-         :title: MKL-DNN
+         :title: DNNL

Review comment:
       ```suggestion
                :title: oneDNN
       ```

##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```

Review comment:
       ```suggestion
         - Values: String ```(default="DNNL")``` if oneDNN is available, otherwise ```(default="")```
       ```

##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```
   - This variable controls the subgraph partitioning in MXNet.
   - This variable is used to perform ONEDNN FP32 operator fusion and quantization. Please refer to the [ONEDNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for how this variable is used and the list of fusion passes.

Review comment:
       ```suggestion
         - This variable is used to perform oneDNN FP32 operator fusion and quantization. Please refer to the [ONEDNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/dnnl/operator_list.md) for how this variable is used and the list of fusion passes.
       ```
       Please double-check the directory name.
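
       For context, a small shell sketch of how this variable is typically used (the backend name "DNNL" assumes this PR's rename; before it, the value was "MKLDNN", and "NONE" disables partitioning):

       ```bash
       # Select the oneDNN subgraph backend explicitly; the value "DNNL"
       # assumes the rename in this PR (previously "MKLDNN").
       export MXNET_SUBGRAPH_BACKEND=DNNL
       python my_script.py   # placeholder workload

       # Setting the variable to NONE disables subgraph partitioning.
       export MXNET_SUBGRAPH_BACKEND=NONE
       ```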

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.

Review comment:
       ```suggestion
             A guide on using oneDNN with MXNet.
       ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification

Review comment:
       ```suggestion
             :title: oneDNN Installation and Verification
       ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL

Review comment:
       ```suggestion
       Intel oneDNN
       ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/index.rst
##########
@@ -22,10 +22,10 @@ The following tutorials will help you learn how to use backend tools to boost pe
 .. container:: cards
 
   .. card::
-     :title: MKL-DNN
-     :link: mkldnn/index.html
+     :title: DNNL

Review comment:
       ```suggestion
            :title: oneDNN
       ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.
 
    .. card::
-      :title: MKL-DNN Quantization
-      :link: mkldnn_quantization
+      :title: DNNL Quantization

Review comment:
       ```suggestion
             :title: oneDNN Quantization
       ```

##########
File path: cd/python/pypi/pypi_package.sh
##########
@@ -22,11 +22,10 @@ set -ex
 export mxnet_variant=${1:?"Please specify the mxnet variant"}
 
 # Due to this PR: https://github.com/apache/incubator-mxnet/pull/14899
-# The setup.py expects that mkldnn_version.h be present in
+# The setup.py expects that dnnl_version.h be present in
 # mxnet-build/3rdparty/onednn/build/install/include
 # The artifact repository stores this file in the dependencies
 # and CD unpacks it to a directory called cd_misc
-# Nov. 2019 Update: With v1.1, MKL-DNN is renaming to DNNL. Hence changing the prefix of file name.
 if [ -f "cd_misc/dnnl_version.h" ]; then
   mkdir -p 3rdparty/onednn/include/oneapi/dnnl
   cp cd_misc/dnnl_version.h 3rdparty/onednn/include/oneapi/dnnl/.

Review comment:
       Please check the directory here: should it be onednn or dnnl?
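
       To help answer the directory question above, a quick shell sketch that checks whether the header lands where setup.py expects it (both paths are copied from the script under review):

       ```bash
       # Run after the copy step in pypi_package.sh: verify dnnl_version.h
       # was unpacked from cd_misc and placed on the expected include path.
       if [ -f "3rdparty/onednn/include/oneapi/dnnl/dnnl_version.h" ]; then
         echo "dnnl_version.h is in place"
       else
         echo "dnnl_version.h is missing" >&2
         exit 1
       fi
       ```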

##########
File path: docs/python_docs/python/tutorials/performance/index.rst
##########
@@ -76,10 +76,10 @@ Accelerated Backend
    ..
       TBD Content
       .. card::
-         :title: MKL-DNN
+         :title: DNNL
          :link: backend/mkldnn/mkldnn_readme
 
-         How to get the most from your CPU by using Intel's MKL-DNN.
+         How to get the most from your CPU by using Intel's DNNL.

Review comment:
       ```suggestion
                How to get the most from your CPU by using Intel's oneDNN.
       ```

##########
File path: README.md
##########
@@ -88,7 +88,7 @@ What's New
 
 ### Ecosystem News
 
-* [ONEDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md)
+* [ONEDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md)

Review comment:
       ```suggestion
       * [oneDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md)
       ```

##########
File path: ci/docker/runtime_functions.sh
##########
@@ -763,7 +763,7 @@ cd_unittest_ubuntu() {
     fi
 
     if [[ ${mxnet_variant} = *mkl ]]; then

Review comment:
       "*mkl" seems to be not valid from mxnet1.8  or 1.7?  but I am not sure if in this context also

##########
File path: docs/python_docs/python/tutorials/performance/backend/profiler.md
##########
@@ -211,11 +211,11 @@ Let's zoom in to check the time taken by operators
 The above picture visualizes the sequence in which the operators were executed and the time taken by each operator.
 
 ### Profiling ONEDNN Operators
-Reagrding ONEDNN operators, the library has already provided the internal profiling tool. Firstly, you need set `MKLDNN_VERBOSE=1` to enable internal profiler.
+Reagrding ONEDNN operators, the library has already provided the internal profiling tool. Firstly, you need set `DNNL_VERBOSE=1` to enable internal profiler.
 
-`$ MKLDNN_VERBOSE=1 python my_script.py > mkldnn_verbose.log`
+`$ DNNL_VERBOSE=1 python my_script.py > dnnl_verbose.log`
 
-Now, the detailed profiling insights of each ONEDNN prmitive are saved into `mkldnn_verbose.log` (like below).
+Now, the detailed profiling insights of each ONEDNN prmitive are saved into `dnnl_verbose.log` (like below).

Review comment:
       ```suggestion
       Now, the detailed profiling insights of each oneDNN primitive are saved into `dnnl_verbose.log` (as shown below).
       ```
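
       A minimal sketch of capturing and skimming that log (the script name is a placeholder; verbose records begin with "dnnl_verbose," in DNNL 1.x builds and "onednn_verbose," in newer oneDNN releases):

       ```bash
       # Capture the oneDNN verbose output together with the script's output.
       DNNL_VERBOSE=1 python my_script.py > dnnl_verbose.log 2>&1
       # Show the first few primitive execution records.
       grep -m 10 '_verbose,exec' dnnl_verbose.log
       ```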



