Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/03/05 14:30:29 UTC

[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #19944: [RFC] Build with MKL-DNN (or DNNL)

anko-intel commented on a change in pull request #19944:
URL: https://github.com/apache/incubator-mxnet/pull/19944#discussion_r588332103



##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -327,7 +327,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
 * MXNET_MKLDNN_ENABLED
   - Values: 0, 1 ```(default=1)```
   - Flag to enable or disable MKLDNN accelerator. On by default.
-  - Only applies to mxnet that has been compiled with MKLDNN (```pip install mxnet-mkl``` or built from source with ```USE_MKLDNN=1```)
+  - Only applies to mxnet that has been compiled with MKLDNN (```pip install mxnet-mkl``` or built from source with ```USE_ONEDNN=1```)

Review comment:
       should be ``pip install mxnet``, as oneDNN is enabled by default
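
   The environment variable documented in the hunk above is a runtime switch, independent of the build flag. A minimal illustrative session (the variable name comes from the doc; the session itself is a sketch, not MXNet's test suite):

   ```shell
   # Disable the oneDNN (MKL-DNN) accelerator at runtime; MXNet reads
   # MXNET_MKLDNN_ENABLED at startup (default is 1, i.e. enabled).
   # This only has an effect on builds compiled with oneDNN support.
   export MXNET_MKLDNN_ENABLED=0
   echo "MXNET_MKLDNN_ENABLED=${MXNET_MKLDNN_ENABLED}"
   ```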
   

##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -399,7 +399,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - Values: 0(false) or 1(true) ```(default=1)```
   - If this variable is set, MXNet will simplify the computation graph, eliminating duplicated operations on the same inputs.
 
-* MXNET_USE_MKLDNN_RNN
+* MXNET_USE_ONEDNN_RNN
   - Values: 0(false) or 1(true) ```(default=1)```
   - This variable controls whether to use the MKL-DNN backend in fused RNN operator for CPU context. There are two fusion implementations of RNN operator in MXNet. The MKL-DNN implementation has a better performance than the naive one, but the latter is more stable in the backward operation currently.

Review comment:
       old name

##########
File path: docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md
##########
@@ -62,18 +62,18 @@ To achieve better performance, the Intel OpenMP and llvm OpenMP are recommended
 ```

Review comment:
       Old MKL-DNN names remain, starting from line 18.
   Maybe the directory and file name could also be changed.

##########
File path: CMakeLists.txt
##########
@@ -62,10 +62,14 @@ option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects supp
 option(USE_LAPACK "Build with lapack support" ON)
 option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON)
 option(USE_MKL_LAYERNORM "Use layer normalization from MKL, which is currently slower than internal.  No effect unless USE_MKL_IF_AVAILABLE is set." OFF)
+if(DEFINED USE_MKLDNN)
+  message(WARNING "USE_MKLDNN is deprecated and will stop being supported soon. Please use USE_ONEDNN instead.")
+  set(USE_ONEDNN ${USE_MKLDNN})
+endif()
 if(USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT MSVC) AND (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64") AND (NOT CMAKE_CROSSCOMPILING))
-  option(USE_MKLDNN "Build with MKL-DNN support" ON)
+  option(USE_ONEDNN "Build with MKL-DNN support" ON)

Review comment:
       Why does oneDNN depend on the presence of the MKL library? Is oneDNN still dependent on MKL?
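
   For context, the `if(DEFINED USE_MKLDNN)` shim in the hunk above relies on a CMake behavior: under policy CMP0077 (CMake >= 3.13), a normal variable set before `option()` takes precedence over the option's default, so the legacy value is forwarded. A standalone sketch of the same pattern, with illustrative flag names (only `USE_MKLDNN`/`USE_ONEDNN` appear in the actual diff):

   ```cmake
   cmake_minimum_required(VERSION 3.13)
   project(flag_alias_demo NONE)

   # Forward a deprecated flag to its replacement, mirroring the diff above.
   if(DEFINED USE_OLD_FLAG)
     message(WARNING "USE_OLD_FLAG is deprecated. Please use USE_NEW_FLAG instead.")
     set(USE_NEW_FLAG ${USE_OLD_FLAG})
   endif()

   # With CMP0077, option() does not overwrite the normal variable set above,
   # so -DUSE_OLD_FLAG=OFF still ends up disabling USE_NEW_FLAG.
   option(USE_NEW_FLAG "Replacement for USE_OLD_FLAG" ON)
   message(STATUS "USE_NEW_FLAG=${USE_NEW_FLAG}")
   ```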

##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -327,7 +327,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
 * MXNET_MKLDNN_ENABLED
   - Values: 0, 1 ```(default=1)```
   - Flag to enable or disable MKLDNN accelerator. On by default.
-  - Only applies to mxnet that has been compiled with MKLDNN (```pip install mxnet-mkl``` or built from source with ```USE_MKLDNN=1```)
+  - Only applies to mxnet that has been compiled with MKLDNN (```pip install mxnet-mkl``` or built from source with ```USE_ONEDNN=1```)

Review comment:
       Line 329: should "MKLDNN" here be "oneDNN"?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org