Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/09/24 11:12:49 UTC

[GitHub] [incubator-mxnet] bartekkuncer opened a new pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

bartekkuncer opened a new pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606


   This change also aims to unify the names used to refer to the dnnl library in MXNet.
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] Code is well-documented
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@mxnet.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721515281



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')

Review comment:
       I would suggest using `quantize_model_onednn` instead of `quantize_model_dnnl`. There's no need to introduce naming with `dnnl` unless it was already there.







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723112495



##########
File path: src/operator/subgraph/dnnl/dnnl_fc.cc
##########
@@ -654,14 +651,14 @@ static bool SgMKLDNNAvoidFCQuantizeInput(const NodeAttrs& attrs,
   return avoid_indexes.count(index_to_check);
 }
 
-NNVM_REGISTER_OP(_sg_mkldnn_fully_connected)
-    .describe(R"code(_sg_mkldnn_fully_connected)code" ADD_FILELINE)
+NNVM_REGISTER_OP(_sg_dnnl_fully_connected)

Review comment:
       > please add an alias with the old name; it could be done in a separate PR
   
   I will create a separate JIRA ticket for that.
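
For reference, the alias the reviewer asks for keeps old model definitions loadable after the rename. A minimal Python sketch of the same idea, using a hypothetical registry dict (in MXNet the real mechanism is NNVM's C++ operator registration):

```python
# Sketch of registering an operator under its new name while keeping the
# old name as an alias. OP_REGISTRY and register_op are hypothetical;
# they only illustrate the alias idea discussed in this review thread.
OP_REGISTRY = {}

def register_op(name, fn, aliases=()):
    """Register fn under name and under every alias."""
    OP_REGISTRY[name] = fn
    for alias in aliases:
        OP_REGISTRY[alias] = fn

register_op("_sg_dnnl_fully_connected",
            lambda x: x,  # stand-in for the real fused kernel
            aliases=("_sg_mkldnn_fully_connected",))

# Old and new names resolve to the same operator.
assert OP_REGISTRY["_sg_mkldnn_fully_connected"] is OP_REGISTRY["_sg_dnnl_fully_connected"]
```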







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724248783



##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -170,25 +165,25 @@ class SgMKLDNNConvSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNConvSelector(
+    auto new_selector = SgDNNLConvSelector(
         disable_all_, disable_conv_bn_, disable_conv_act_, disable_conv_sum_, quantize_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNConvProperty : public SubgraphProperty {
+class SgDNNLConvProperty : public SubgraphProperty {
  public:
-  SgMKLDNNConvProperty() {
+  SgDNNLConvProperty() {
     disable_conv_bn_  = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_BN", 0);
     disable_conv_act_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_RELU", 0);
     disable_conv_sum_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_SUM", 0);
 
     disable_all_ = disable_conv_bn_ && disable_conv_act_ && disable_conv_sum_;
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN convolution optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNConvProperty>();
+    static const std::string& name = "DNNL convolution optimization pass";

Review comment:
       done
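
The `MXNET_DISABLE_ONEDNN_FUSE_*` toggles in the hunk above combine as shown in this sketch: each defaults to 0, and all fusions count as disabled only when every individual fusion is disabled (mirroring `dmlc::GetEnv(name, 0)`; the Python helper here is illustrative, not MXNet code):

```python
import os

def get_env_int(name, default=0):
    # Mimics dmlc::GetEnv: read an integer env var with a fallback default.
    val = os.environ.get(name)
    return int(val) if val is not None else default

def all_conv_fusions_disabled():
    disable_conv_bn  = get_env_int("MXNET_DISABLE_ONEDNN_FUSE_CONV_BN")
    disable_conv_act = get_env_int("MXNET_DISABLE_ONEDNN_FUSE_CONV_RELU")
    disable_conv_sum = get_env_int("MXNET_DISABLE_ONEDNN_FUSE_CONV_SUM")
    # disable_all_ in the C++ above: only true if every fusion is off.
    return bool(disable_conv_bn and disable_conv_act and disable_conv_sum)

os.environ["MXNET_DISABLE_ONEDNN_FUSE_CONV_BN"] = "1"
# conv+relu and conv+sum fusion are still enabled, so not everything is off:
assert all_conv_fusions_disabled() is False
```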

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -204,7 +199,7 @@ class SgMKLDNNConvProperty : public SubgraphProperty {
     nnvm::Symbol new_sym;
     new_sym.outputs.emplace_back(last_node);
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_";
+    node_name << "sg_dnnl_";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938416482


   @mxnet-bot run ci [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] mozga-intel commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mozga-intel commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942383847


   @szha Could you please review and help with the merge? Thanks! 





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724252312



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -123,7 +122,7 @@ class SgMKLDNNSelfAttQKOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention qk only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention qk only supports "

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -490,7 +489,7 @@ class MKLDNNSelfAttValAttOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention val only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention val only supports "

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724254155



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";
+    auto property                  = std::make_shared<SgDNNLTransformerQKProperty>();
     property->SetAttr<std::string>("property_name", name);
     property->SetAttr<bool>("inference_only", true);
-    if (dmlc::GetEnv("MXNET_DISABLE_MKLDNN_TRANSFORMER_OPT", 0)) {
+    if (dmlc::GetEnv("MXNET_DISABLE_DNNL_TRANSFORMER_OPT", 0)) {

Review comment:
       done







[GitHub] [incubator-mxnet] vpirogov commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-936500546


   @bartekkuncer, there's no plan to change the API, so the `dnnl` prefix will stay. I would suggest sticking to the `oneDNN` name for everything that is visible to MXNet users, though.





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942158948


   Jenkins CI successfully triggered : [unix-cpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721238584



##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```
   - This variable controls the subgraph partitioning in MXNet.
   - This variable is used to perform ONEDNN FP32 operator fusion and quantization. Please refer to the [ONEDNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for how this variable is used and the list of fusion passes.

Review comment:
       This is an old directory name, left here on purpose.
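
The backend selection described in this doc hunk can be sketched as follows: the env var wins if set, otherwise the default depends on whether oneDNN support was compiled in (the helper function is illustrative; the values come from the documented defaults above):

```python
import os

def subgraph_backend(onednn_available):
    # Default is "DNNL" when oneDNN support is built in, else empty.
    default = "DNNL" if onednn_available else ""
    return os.environ.get("MXNET_SUBGRAPH_BACKEND", default)

os.environ.pop("MXNET_SUBGRAPH_BACKEND", None)
assert subgraph_backend(True) == "DNNL"
assert subgraph_backend(False) == ""

os.environ["MXNET_SUBGRAPH_BACKEND"] = "NONE"  # user override wins
assert subgraph_backend(True) == "NONE"
```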







[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721308210



##########
File path: src/operator/subgraph/dnnl/dnnl_fc.cc
##########
@@ -654,14 +651,14 @@ static bool SgMKLDNNAvoidFCQuantizeInput(const NodeAttrs& attrs,
   return avoid_indexes.count(index_to_check);
 }
 
-NNVM_REGISTER_OP(_sg_mkldnn_fully_connected)
-    .describe(R"code(_sg_mkldnn_fully_connected)code" ADD_FILELINE)
+NNVM_REGISTER_OP(_sg_dnnl_fully_connected)

Review comment:
       please add an alias with the old name; it could be done in a separate PR







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-939992859


   Jenkins CI successfully triggered : [website, centos-gpu, windows-gpu, edge, windows-cpu, miscellaneous, unix-gpu, centos-cpu, clang, sanity, unix-cpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724212020



##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```

Review comment:
       Done.







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724215912



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
       Done.







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938530351


   @mxnet-bot run ci [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938416560


   Jenkins CI successfully triggered : [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721244150



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -527,13 +527,13 @@ def quantize_model(sym, arg_params, aux_params, data_names=('data',),
 
     return qsym, qarg_params, aux_params
 
-def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
-                          ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
-                          calib_mode='entropy', calib_data=None, num_calib_batches=None,
-                          quantized_dtype='int8', quantize_mode='smart',
-                          quantize_granularity='tensor-wise', logger=None):
+def quantize_model_dnnl(sym, arg_params, aux_params, data_names=('data',),
+                        ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
+                        calib_mode='entropy', calib_data=None, num_calib_batches=None,
+                        quantized_dtype='int8', quantize_mode='smart',
+                        quantize_granularity='tensor-wise', logger=None):
     """User-level API for generating a fusion + quantized model from a FP32 model
-    w/ or w/o calibration with Intel MKL-DNN.
+    w/ or w/o calibration with Intel DNNL.

Review comment:
       ```suggestion
       w/ or w/o calibration with oneDNN.
   ```

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')

Review comment:
       ```suggestion
               'quantize_model_dnnl only support Intel cpu platform with oneDNN Backend')
   ```

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
       ```suggestion
       sym = sym.optimize_for(backend='ONEDNN_QUANTIZE')
   ```







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-931475336


   Undefined action detected. 
   Permissible actions are : run ci [all], run ci [job1, job2] 
   Example : @mxnet-bot run ci [all] 
   Example : @mxnet-bot run ci [centos-cpu, clang]





[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721269412



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
       Here, @PawelGlomski-Intel, I think `DNNL_QUANTIZE` should be left as is. This way all runtime names share the same DNNL prefix. (But we can discuss further offline, if you wish.)
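
The consistency argument above (keep all runtime backend names under one prefix) can be sketched like this; the backend set and checker are hypothetical, not MXNet's actual backend registry:

```python
# Illustrative set of runtime backend names sharing the DNNL prefix.
RUNTIME_BACKENDS = {"DNNL", "DNNL_QUANTIZE"}

def check_backend(name, prefix="DNNL"):
    # A single prefix lets every runtime name be validated uniformly.
    if not name.startswith(prefix):
        raise ValueError("backend %r does not use the %s prefix" % (name, prefix))
    return name in RUNTIME_BACKENDS

assert check_backend("DNNL_QUANTIZE") is True
```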







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724249505



##########
File path: src/operator/subgraph/dnnl/dnnl_elemwisemul_post_quantize_property.h
##########
@@ -161,7 +160,7 @@ class ElemwiseMulPostQuantizeProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN EltwiseMul post-quantization optimization pass";
+    static const std::string& name = "DNNL EltwiseMul post-quantization optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721237231



##########
File path: ci/docker/runtime_functions.sh
##########
@@ -763,7 +763,7 @@ cd_unittest_ubuntu() {
     fi
 
     if [[ ${mxnet_variant} = *mkl ]]; then

Review comment:
       Not in this context.







[GitHub] [incubator-mxnet] bartekkuncer edited a comment on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer edited a comment on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-937865566


   > Review of the files from the `subgraph` commit. For easy-to-find cases, I have provided only one occurrence.
   
   @PawelGlomski-Intel Thanks for the help! :) 





[GitHub] [incubator-mxnet] szha commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
szha commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-937346510


   Vadim's suggestion makes sense.





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724252858



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724204583



##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -106,7 +105,7 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
     nnvm::ObjectPtr n = nnvm::Node::Create();
 
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_batch_norm_relu_" << std::to_string(subgraph_id);
+    node_name << "sg_dnnl_batch_norm_relu_" << std::to_string(subgraph_id);

Review comment:
       ```suggestion
       node_name << "sg_onednn_batch_norm_relu_" << std::to_string(subgraph_id);
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -91,8 +90,8 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN BN + ReLU optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNBNReLUProperty>();
+    static const std::string& name = "DNNL BN + ReLU optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN BN + ReLU optimization pass";
   ```

##########
File path: python/mxnet/amp/lists/symbol_fp16.py
##########
@@ -611,10 +611,10 @@
 
 if Features().is_enabled('ONEDNN'):
     FP32_FUNCS.extend([
-        '_sg_mkldnn_conv',
-        '_sg_mkldnn_fully_connected',
-        '_sg_mkldnn_selfatt_qk',
-        '_sg_mkldnn_selfatt_valatt',
+        '_sg_dnnl_conv',
+        '_sg_dnnl_fully_connected',
+        '_sg_dnnl_selfatt_qk',
+        '_sg_dnnl_selfatt_valatt',

Review comment:
       ```suggestion
           '_sg_onednn_conv',
           '_sg_onednn_fully_connected',
           '_sg_onednn_selfatt_qk',
           '_sg_onednn_selfatt_valatt',
   ```
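
The suggestions above follow a purely mechanical spelling change. As a rough illustration (not part of this PR), a stdlib-only helper could perform the `mkldnn` → `onednn` translation for such operator names:

```python
# Hypothetical helper sketching the mkldnn -> onednn rename pattern
# discussed in this review thread; it is not part of the actual PR.
def to_onednn_name(op_name: str) -> str:
    """Translate a legacy MKLDNN-style identifier to the oneDNN spelling."""
    return op_name.replace("mkldnn", "onednn").replace("MKLDNN", "ONEDNN")

legacy_ops = [
    "_sg_mkldnn_conv",
    "_sg_mkldnn_fully_connected",
    "_sg_mkldnn_selfatt_qk",
    "_sg_mkldnn_selfatt_valatt",
]
print([to_onednn_name(op) for op in legacy_ops])
# → ['_sg_onednn_conv', '_sg_onednn_fully_connected',
#    '_sg_onednn_selfatt_qk', '_sg_onednn_selfatt_valatt']
```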

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -490,7 +489,7 @@ class MKLDNNSelfAttValAttOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention val only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention val only supports "

Review comment:
       ```suggestion
       LOG(FATAL) << "Not implemented: subgraph oneDNN self attention val only supports "
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_align_scale_property.h
##########
@@ -117,13 +116,13 @@ class SgMKLDNNConcatPostQuantizeSelector : public SubgraphSelectorV2 {
   std::unordered_set<const nnvm::Node*> visit_list_;
 };
 
-class SgMKLDNNPostQuantizeAlignScaleProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeAlignScaleProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
+  SgDNNLPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization scale alignment optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeAlignScaleProperty>();
+    static const std::string& name = "DNNL post-quantization scale alignment optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN post-quantization scale alignment optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_subgraph_property.cc
##########
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#if MXNET_USE_ONEDNN == 1
+
+#include "dnnl_bn_relu_property.h"
+#include "dnnl_conv_property.h"
+#include "dnnl_elemwisemul_post_quantize_property.h"
+#include "dnnl_fc_post_quantize_property.h"
+#include "dnnl_fc_property.h"
+#include "dnnl_post_quantize_align_scale_property.h"
+#include "dnnl_post_quantize_property.h"
+#include "dnnl_transformer_post_quantize_property.h"
+#include "dnnl_transformer_qk_property.h"
+#include "dnnl_transformer_valatt_property.h"
+
+namespace mxnet {
+namespace op {
+
+MXNET_REGISTER_SUBGRAPH_BACKEND(DNNL)

Review comment:
       ```suggestion
   MXNET_REGISTER_SUBGRAPH_BACKEND(ONEDNN)
   ```
   This one might be harder to change

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";
+    auto property                  = std::make_shared<SgDNNLTransformerQKProperty>();
     property->SetAttr<std::string>("property_name", name);
     property->SetAttr<bool>("inference_only", true);
-    if (dmlc::GetEnv("MXNET_DISABLE_MKLDNN_TRANSFORMER_OPT", 0)) {
+    if (dmlc::GetEnv("MXNET_DISABLE_DNNL_TRANSFORMER_OPT", 0)) {

Review comment:
       ```suggestion
       if (dmlc::GetEnv("MXNET_DISABLE_ONEDNN_TRANSFORMER_OPT", 0)) {
   ```
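
For context, this pass is controlled through an environment variable. Assuming the suggested name is adopted (it is the reviewer's proposal here, not necessarily the merged spelling), disabling the transformer fusion from the shell would look like:

```shell
# Illustrative only: MXNET_DISABLE_ONEDNN_TRANSFORMER_OPT is the name
# proposed in this review suggestion, not necessarily the final one.
export MXNET_DISABLE_ONEDNN_TRANSFORMER_OPT=1
```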

##########
File path: src/operator/subgraph/dnnl/dnnl_fc_post_quantize_property.h
##########
@@ -146,22 +145,22 @@ class SgMKLDNNFCPostQuantizeSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNFCPostQuantizeSelector(disable_all, disable_float_output);
+    auto new_selector = SgDNNLFCPostQuantizeSelector(disable_all, disable_float_output);
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLFCPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCPostQuantizeProperty() {
+  SgDNNLFCPostQuantizeProperty() {
     disable_fuse_all     = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FUSE_ALL", false);
     disable_float_output = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FLOAT_OUTPUT", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConected post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCPostQuantizeProperty>();
+    static const std::string& name = "DNNL FullyConected post-quantization optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN FullyConected post-quantization optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_property.h
##########
@@ -112,22 +111,22 @@ class SgMKLDNNPostQuantizeSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNPostQuantizeSelector();
+    auto new_selector = SgDNNLPostQuantizeSelector();
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeProperty() {
-    support_requantize_fusion_op_name.insert("_sg_mkldnn_conv");
+  SgDNNLPostQuantizeProperty() {
+    support_requantize_fusion_op_name.insert("_sg_dnnl_conv");
     support_requantize_fusion_op_name.insert("_contrib_quantized_elemwise_add");
     support_requantize_fusion_op_name.insert("_contrib_quantized_npi_add");
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeProperty>();
+    static const std::string& name = "DNNL post-quantization optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN post-quantization optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -123,7 +122,7 @@ class SgMKLDNNSelfAttQKOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention qk only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention qk only supports "

Review comment:
       ```suggestion
       LOG(FATAL) << "Not implemented: subgraph oneDNN self attention qk only supports "
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_elemwisemul_post_quantize_property.h
##########
@@ -161,7 +160,7 @@ class ElemwiseMulPostQuantizeProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN EltwiseMul post-quantization optimization pass";
+    static const std::string& name = "DNNL EltwiseMul post-quantization optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN EltwiseMul post-quantization optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -204,7 +199,7 @@ class SgMKLDNNConvProperty : public SubgraphProperty {
     nnvm::Symbol new_sym;
     new_sym.outputs.emplace_back(last_node);
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_";
+    node_name << "sg_dnnl_";

Review comment:
       ```suggestion
       node_name << "sg_onednn_";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -170,25 +165,25 @@ class SgMKLDNNConvSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNConvSelector(
+    auto new_selector = SgDNNLConvSelector(
         disable_all_, disable_conv_bn_, disable_conv_act_, disable_conv_sum_, quantize_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNConvProperty : public SubgraphProperty {
+class SgDNNLConvProperty : public SubgraphProperty {
  public:
-  SgMKLDNNConvProperty() {
+  SgDNNLConvProperty() {
     disable_conv_bn_  = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_BN", 0);
     disable_conv_act_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_RELU", 0);
     disable_conv_sum_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_SUM", 0);
 
     disable_all_ = disable_conv_bn_ && disable_conv_act_ && disable_conv_sum_;
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN convolution optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNConvProperty>();
+    static const std::string& name = "DNNL convolution optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN convolution optimization pass";
   ```

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -527,13 +527,13 @@ def quantize_model(sym, arg_params, aux_params, data_names=('data',),
 
     return qsym, qarg_params, aux_params
 
-def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
-                          ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
-                          calib_mode='entropy', calib_data=None, num_calib_batches=None,
-                          quantized_dtype='int8', quantize_mode='smart',
-                          quantize_granularity='tensor-wise', logger=None):
+def quantize_model_dnnl(sym, arg_params, aux_params, data_names=('data',),

Review comment:
       ```suggestion
   def quantize_model_onednn(sym, arg_params, aux_params, data_names=('data',),
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN Transformer optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_fc_property.h
##########
@@ -156,21 +155,21 @@ class SgMKLDNNFCSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNFCSelector(disable_fc_eltwise_, quantized_);
+    auto new_selector = SgDNNLFCSelector(disable_fc_eltwise_, quantized_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCProperty : public SubgraphProperty {
+class SgDNNLFCProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCProperty() {
+  SgDNNLFCProperty() {
     disable_fc_eltwise_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_FC_ELTWISE", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConnected optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCProperty>();
+    static const std::string& name = "DNNL FullyConnected optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN FullyConnected optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_valatt_property.h
##########
@@ -227,22 +226,22 @@ class SgMKLDNNTransformerValAttSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerValAttSelector();
+    auto new_selector = SgDNNLTransformerValAttSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerValAttProperty : public SubgraphProperty {
+class SgDNNLTransformerValAttProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerValAttProperty() {}
+  SgDNNLTransformerValAttProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerValAttProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN Transformer optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -686,23 +682,22 @@ static bool SgMKLDNNConvOpStorageType(const nnvm::NodeAttrs& attrs,
   }
 }
 
-std::vector<std::pair<int, int>> SgMKLDNNConvInplaceOption(const NodeAttrs& attrs) {
-  auto const& param = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
-  if (param.full_conv_param.mkldnn_param.with_sum &&
-      !param.full_conv_param.mkldnn_param.dedup_sum) {
+std::vector<std::pair<int, int>> SgDNNLConvInplaceOption(const NodeAttrs& attrs) {
+  auto const& param = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
+  if (param.full_conv_param.dnnl_param.with_sum && !param.full_conv_param.dnnl_param.dedup_sum) {
     return std::vector<std::pair<int, int>>{{GetInSumIndex(param), 0}};
   } else {
     return std::vector<std::pair<int, int>>();
   }
 }
 
-nnvm::ObjectPtr SgMKLDNNConvQuantizedOp(const NodeAttrs& attrs) {
-  auto const& param    = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
+nnvm::ObjectPtr SgDNNLConvQuantizedOp(const NodeAttrs& attrs) {
+  auto const& param    = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
   nnvm::ObjectPtr node = nnvm::Node::Create();
-  node->attrs.op       = Op::Get("_sg_mkldnn_conv");
+  node->attrs.op       = Op::Get("_sg_dnnl_conv");
   const int k_ndims    = param.full_conv_param.conv_param.kernel.ndim();
   CHECK(k_ndims == 2U || k_ndims == 3U)
-      << "Quantized Convolution of MKL-DNN supports 2D/3D kernel currently."
+      << "Quantized Convolution of DNNL supports 2D/3D kernel currently."

Review comment:
       ```suggestion
         << "Quantized Convolution of oneDNN supports 2D/3D kernel currently."
   ```







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724251302



##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_property.h
##########
@@ -112,22 +111,22 @@ class SgMKLDNNPostQuantizeSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNPostQuantizeSelector();
+    auto new_selector = SgDNNLPostQuantizeSelector();
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeProperty() {
-    support_requantize_fusion_op_name.insert("_sg_mkldnn_conv");
+  SgDNNLPostQuantizeProperty() {
+    support_requantize_fusion_op_name.insert("_sg_dnnl_conv");
     support_requantize_fusion_op_name.insert("_contrib_quantized_elemwise_add");
     support_requantize_fusion_op_name.insert("_contrib_quantized_npi_add");
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeProperty>();
+    static const std::string& name = "DNNL post-quantization optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724248287



##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -686,23 +682,22 @@ static bool SgMKLDNNConvOpStorageType(const nnvm::NodeAttrs& attrs,
   }
 }
 
-std::vector<std::pair<int, int>> SgMKLDNNConvInplaceOption(const NodeAttrs& attrs) {
-  auto const& param = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
-  if (param.full_conv_param.mkldnn_param.with_sum &&
-      !param.full_conv_param.mkldnn_param.dedup_sum) {
+std::vector<std::pair<int, int>> SgDNNLConvInplaceOption(const NodeAttrs& attrs) {
+  auto const& param = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
+  if (param.full_conv_param.dnnl_param.with_sum && !param.full_conv_param.dnnl_param.dedup_sum) {
     return std::vector<std::pair<int, int>>{{GetInSumIndex(param), 0}};
   } else {
     return std::vector<std::pair<int, int>>();
   }
 }
 
-nnvm::ObjectPtr SgMKLDNNConvQuantizedOp(const NodeAttrs& attrs) {
-  auto const& param    = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
+nnvm::ObjectPtr SgDNNLConvQuantizedOp(const NodeAttrs& attrs) {
+  auto const& param    = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
   nnvm::ObjectPtr node = nnvm::Node::Create();
-  node->attrs.op       = Op::Get("_sg_mkldnn_conv");
+  node->attrs.op       = Op::Get("_sg_dnnl_conv");
   const int k_ndims    = param.full_conv_param.conv_param.kernel.ndim();
   CHECK(k_ndims == 2U || k_ndims == 3U)
-      << "Quantized Convolution of MKL-DNN supports 2D/3D kernel currently."
+      << "Quantized Convolution of DNNL supports 2D/3D kernel currently."

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724247524



##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -91,8 +90,8 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN BN + ReLU optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNBNReLUProperty>();
+    static const std::string& name = "DNNL BN + ReLU optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724246361



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -527,13 +527,13 @@ def quantize_model(sym, arg_params, aux_params, data_names=('data',),
 
     return qsym, qarg_params, aux_params
 
-def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
-                          ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
-                          calib_mode='entropy', calib_data=None, num_calib_batches=None,
-                          quantized_dtype='int8', quantize_mode='smart',
-                          quantize_granularity='tensor-wise', logger=None):
+def quantize_model_dnnl(sym, arg_params, aux_params, data_names=('data',),

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-937865566









[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-933210935


   > Is this change ready?
   
   Yes.





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724215554



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')

Review comment:
       Done.







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724246542



##########
File path: python/mxnet/amp/lists/symbol_fp16.py
##########
@@ -611,10 +611,10 @@
 
 if Features().is_enabled('ONEDNN'):
     FP32_FUNCS.extend([
-        '_sg_mkldnn_conv',
-        '_sg_mkldnn_fully_connected',
-        '_sg_mkldnn_selfatt_qk',
-        '_sg_mkldnn_selfatt_valatt',
+        '_sg_dnnl_conv',
+        '_sg_dnnl_fully_connected',
+        '_sg_dnnl_selfatt_qk',
+        '_sg_dnnl_selfatt_valatt',

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -106,7 +105,7 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
     nnvm::ObjectPtr n = nnvm::Node::Create();
 
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_batch_norm_relu_" << std::to_string(subgraph_id);
+    node_name << "sg_dnnl_batch_norm_relu_" << std::to_string(subgraph_id);

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-939789314


   > https://github.com/apache/incubator-mxnet/blob/4b73646586c128a11345b4f3e1811cc0f3f1bf7d/cmake/BuildTVM.cmake#L138
   > 
   > 
   > USE_MKLDNN => USE_ONEDNN
   
   This is not our project.





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r718220715



##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -168,42 +166,42 @@ void SgMKLDNNConvOperator::Forward(const OpContext& ctx,
     }
   }
   CHECK_EQ(input_size, idx);
-  bool has_bias  = mkldnn_param.with_bn || !conv_param.no_bias;
+  bool has_bias  = dnnl_param.with_bn || !conv_param.no_bias;
   NDArray data   = inputs[in_data];
-  NDArray output = mkldnn_param.with_sum ? inputs[in_sum] : outputs[kOut];
+  NDArray output = dnnl_param.with_sum ? inputs[in_sum] : outputs[kOut];
 
   // Copy inputs[in_sum] into outputs[kOut] in case inplace optimization failed.
-  if (mkldnn_param.with_sum) {
+  if (dnnl_param.with_sum) {
     if (!initialized_) {
-      // TODO(zhennan): Currently, mkldnn fallback mechanism will break inplace option,
+      // TODO(zhennan): Currently, dnnl fallback mechanism will break inplace option,
       // which make check (req[kOut] == kWriteInplace) useless.
-      auto in_mkl_mem  = inputs[in_sum].GetMKLDNNData();
-      auto out_mkl_mem = outputs[kOut].GetMKLDNNData();
+      auto in_mkl_mem  = inputs[in_sum].GetDNNLData();
+      auto out_mkl_mem = outputs[kOut].GetDNNLData();
       if (in_mkl_mem->get_data_handle() == out_mkl_mem->get_data_handle()) {
         inplace_ = true;
       }
     }
     if (!inplace_) {
-      auto in_mkl_mem  = inputs[in_sum].GetMKLDNNData();
-      auto out_mkl_mem = outputs[kOut].GetMKLDNNData();
+      auto in_mkl_mem  = inputs[in_sum].GetDNNLData();
+      auto out_mkl_mem = outputs[kOut].GetDNNLData();
       if (outputs[kOut].dtype() == mshadow::kInt32) {
         const auto& mem_desc  = in_mkl_mem->get_desc();
-        const auto this_dtype = get_mkldnn_type(mshadow::kInt32);
+        const auto this_dtype = get_dnnl_type(mshadow::kInt32);
         auto omd              = mem_desc;
-        omd.data.data_type    = static_cast<mkldnn_data_type_t>(this_dtype);
-        mkldnn_mem_ptr tmp_mem(new mkldnn::memory(
-            omd, CpuEngine::Get()->get_engine(), out_mkl_mem->get_data_handle()));
-        MKLDNNStream::Get()->RegisterMem(tmp_mem);
-        MKLDNNStream::Get()->RegisterPrimArgs(
-            mkldnn::reorder(*in_mkl_mem, *tmp_mem),
-            {{MKLDNN_ARG_FROM, *in_mkl_mem}, {MKLDNN_ARG_TO, *tmp_mem}});
+        omd.data.data_type    = static_cast<dnnl_data_type_t>(this_dtype);
+        dnnl_mem_ptr tmp_mem(
+            new dnnl::memory(omd, CpuEngine::Get()->get_engine(), out_mkl_mem->get_data_handle()));

Review comment:
       done.







[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721516165



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
       @anko-intel, I would suggest to aling on `ONEDNN` everywhere.







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942158889


   @mxnet-bot run ci [unix-cpu]





[GitHub] [incubator-mxnet] mozga-intel edited a comment on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mozga-intel edited a comment on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942383847


   @szha Could you please review it and help with the merge? Thanks! 





[GitHub] [incubator-mxnet] mozga-intel commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mozga-intel commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-933702891


   Please use `git mv` to move or rename a file, a directory (an example: [link](https://github.com/apache/incubator-mxnet/pull/20606/files#diff-9c23a9af8ecce528f160528e8e2079f5e3b77f33194de47af7c63875fb85ead8))
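   For illustration, the `git mv` workflow suggested above can be sketched as follows (a throwaway demo only — the repository, file names, and commit messages here are made up, not taken from the PR):

   ```shell
   set -e
   tmpdir=$(mktemp -d)
   cd "$tmpdir"
   git init -q renamed-repo
   cd renamed-repo
   git config user.email "reviewer@example.com"
   git config user.name "Reviewer"
   echo "# oneDNN with MXNet" > mkldnn_readme.md
   git add mkldnn_readme.md
   git commit -qm "add readme under old name"
   # 'git mv' moves the file and stages the move in one step, so the
   # change is recorded as a rename instead of a delete plus an add.
   git mv mkldnn_readme.md dnnl_readme.md
   git commit -qm "rename to dnnl_readme.md"
   # '--follow' lets 'git log' trace the file's history across the rename.
   git log --follow --oneline -- dnnl_readme.md
   ```

   With the rename staged this way, reviewers see the file's full history rather than an unrelated-looking deletion and addition.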





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723110890



##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```
   - This variable controls the subgraph partitioning in MXNet.
   - This variable is used to perform ONEDNN FP32 operator fusion and quantization. Please refer to the [ONEDNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for how this variable is used and the list of fusion passes.

Review comment:
       changed ONEDNN -> oneDNN







[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721267399



##########
File path: cd/python/pypi/pypi_package.sh
##########
@@ -22,11 +22,10 @@ set -ex
 export mxnet_variant=${1:?"Please specify the mxnet variant"}
 
 # Due to this PR: https://github.com/apache/incubator-mxnet/pull/14899
-# The setup.py expects that mkldnn_version.h be present in
+# The setup.py expects that dnnl_version.h be present in
 # mxnet-build/3rdparty/onednn/build/install/include
 # The artifact repository stores this file in the dependencies
 # and CD unpacks it to a directory called cd_misc
-# Nov. 2019 Update: With v1.1, MKL-DNN is renaming to DNNL. Hence changing the prefix of file name.
 if [ -f "cd_misc/dnnl_version.h" ]; then
   mkdir -p 3rdparty/onednn/include/oneapi/dnnl
   cp cd_misc/dnnl_version.h 3rdparty/onednn/include/oneapi/dnnl/.

Review comment:
       sorry, my fault, please ignore this comment







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724254683



##########
File path: src/operator/subgraph/dnnl/dnnl_subgraph_property.cc
##########
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#if MXNET_USE_ONEDNN == 1
+
+#include "dnnl_bn_relu_property.h"
+#include "dnnl_conv_property.h"
+#include "dnnl_elemwisemul_post_quantize_property.h"
+#include "dnnl_fc_post_quantize_property.h"
+#include "dnnl_fc_property.h"
+#include "dnnl_post_quantize_align_scale_property.h"
+#include "dnnl_post_quantize_property.h"
+#include "dnnl_transformer_post_quantize_property.h"
+#include "dnnl_transformer_qk_property.h"
+#include "dnnl_transformer_valatt_property.h"
+
+namespace mxnet {
+namespace op {
+
+MXNET_REGISTER_SUBGRAPH_BACKEND(DNNL)

Review comment:
       done







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-932175323


   Jenkins CI successfully triggered : [centos-cpu, unix-gpu]





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938530400


   Jenkins CI successfully triggered : [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723397326



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
      This is a good point. We had this planned for consistency, but have not implemented the change yet. I'll plan this for the oneDNN v2.5 release.







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723108898



##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md
##########
@@ -208,9 +208,9 @@ o = exe.outputs[0]
 t = o.asnumpy()
 ```
 
-More detailed debugging and profiling information can be logged by setting the environment variable 'MKLDNN_VERBOSE':
+More detailed debugging and profiling information can be logged by setting the environment variable 'DNNL_VERBOSE':
 ```
-export MKLDNN_VERBOSE=1
+export DNNL_VERBOSE=1
 ```
 For example, by running above code snippet, the following debugging logs providing more insights on ONEDNN primitives `convolution` and `reorder`. That includes: Memory layout, infer shape and the time cost of primitive execution.

Review comment:
       done

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL

Review comment:
       done







[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721437815



##########
File path: cpp-package/example/inference/README.md
##########
@@ -27,7 +27,7 @@ This directory contains following examples. In order to run the examples, ensure
 
 ## [imagenet_inference.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp>)
 
-This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® MKL-DNN (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.
+This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® DNNL (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.

Review comment:
       No need to have Intel(R) prefix as well, just oneDNN.







[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721509254



##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL

Review comment:
       No need for `Intel` there. The full library name is just `oneDNN`







[GitHub] [incubator-mxnet] szha commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
szha commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-937346510


   Vadim's suggestion makes sense.





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724254495



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_valatt_property.h
##########
@@ -227,22 +226,22 @@ class SgMKLDNNTransformerValAttSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerValAttSelector();
+    auto new_selector = SgDNNLTransformerValAttSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerValAttProperty : public SubgraphProperty {
+class SgDNNLTransformerValAttProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerValAttProperty() {}
+  SgDNNLTransformerValAttProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerValAttProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724250908



##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_align_scale_property.h
##########
@@ -117,13 +116,13 @@ class SgMKLDNNConcatPostQuantizeSelector : public SubgraphSelectorV2 {
   std::unordered_set<const nnvm::Node*> visit_list_;
 };
 
-class SgMKLDNNPostQuantizeAlignScaleProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeAlignScaleProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
+  SgDNNLPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization scale alignment optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeAlignScaleProperty>();
+    static const std::string& name = "DNNL post-quantization scale alignment optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer edited a comment on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer edited a comment on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-937865566


   > Review of the files from the `subgraph` commit. For easy-to-find cases, I have provided only one occurrence.
   
   @PawelGlomski-Intel Thanks for the help! :) 





[GitHub] [incubator-mxnet] mozga-intel edited a comment on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mozga-intel edited a comment on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942383847


   @szha @akarbown Could you please review it and help with the merge? Thanks! 





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723108633



##########
File path: docs/python_docs/python/tutorials/index.rst
##########
@@ -84,10 +84,10 @@ Performance
       How to use int8 in your model to boost training speed.
 
    .. card::
-      :title: MKL-DNN
+      :title: DNNL

Review comment:
       done







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938024095


   Jenkins CI successfully triggered : [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r715641149



##########
File path: include/mxnet/ndarray.h
##########
@@ -858,9 +858,9 @@ class NDArray {
     std::vector<Storage::Handle> aux_handles;
 
 #if MXNET_USE_ONEDNN == 1
-    /*! This is created when data is stored in MKLDNN format.
+    /*! This is created when data is stored in DNNL format.
      */
-    std::shared_ptr<MKLDNNMemory> mkl_mem_;
+    std::shared_ptr<DNNLMemory> mkl_mem_;

Review comment:
       done.







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721225903



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -23,25 +23,24 @@
 #include <utility>
 #include <vector>
 
-#include "./mkldnn_transformer-inl.h"
-
 #include "../../contrib/transformer-inl.h"
 #include "../../quantization/quantization_utils.h"
 #include "../../tensor/elemwise_unary_op.h"
 #include "../common.h"
+#include "./dnnl_transformer-inl.h"

Review comment:
      I applied clang-format, which changed the order. I believe that is a change for the better.







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-935996023


   @vpirogov I saw you suggested in a few places renaming our variables and operator/function names to onednn*. I named them dnnl* to make the code look more consistent, as the oneDNN API uses dnnl prefixes and namespaces everywhere. Is there a new API with new names (dnnl->onednn) coming?
   
   @szha What is your take on this? Would you prefer to have the names in mxnet changed from dnnl to onednn (wherever possible), even though we will still have to write dnnl to use oneDNN, so there would be a mixture of names?





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723109261



##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.

Review comment:
       done

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.
 
    .. card::
-      :title: MKL-DNN Quantization
-      :link: mkldnn_quantization
+      :title: DNNL Quantization

Review comment:
       done

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.
 
    .. card::
-      :title: MKL-DNN Quantization
-      :link: mkldnn_quantization
+      :title: DNNL Quantization
+      :link: dnnl_quantization
 
-      How to perform quantization with MKLDNN
+      How to perform quantization with DNNL

Review comment:
       done







[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r722079044



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
      @vpirogov, originally I also thought we could move to the oneDNN name everywhere, but from https://oneapi-src.github.io/oneDNN/v2/dev_guide_transition_to_dnnl.html I can see that the DNNL name will still be used, for example DNNL_VERBOSE. This leaves us in the slightly strange situation of describing something as oneDNN and then referring to it as DNNL_something. It happens both at runtime and in CMake. Are you going to add additional flags and environment variables with an ONEDNN prefix?
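      As a small illustration of this split, the runtime control keeps its `DNNL_` prefix even though the library is documented as oneDNN. The sketch below only demonstrates the environment-variable convention and does not require oneDNN to be installed; real `dnnl_verbose` log lines appear only when a oneDNN-backed binary actually executes primitives:

      ```shell
      # oneDNN still reads DNNL_VERBOSE (not ONEDNN_VERBOSE) at runtime;
      # setting it to 1 enables primitive-level logging in library builds.
      export DNNL_VERBOSE=1
      # Child processes inherit it, which is how a launched MXNet script sees it.
      sh -c 'echo "child sees DNNL_VERBOSE=$DNNL_VERBOSE"'
      # prints: child sees DNNL_VERBOSE=1
      ```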







[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721221157



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -23,25 +23,24 @@
 #include <utility>
 #include <vector>
 
-#include "./mkldnn_transformer-inl.h"
-
 #include "../../contrib/transformer-inl.h"
 #include "../../quantization/quantization_utils.h"
 #include "../../tensor/elemwise_unary_op.h"
 #include "../common.h"
+#include "./dnnl_transformer-inl.h"

Review comment:
       Other than that, LGTM

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -23,25 +23,24 @@
 #include <utility>
 #include <vector>
 
-#include "./mkldnn_transformer-inl.h"
-
 #include "../../contrib/transformer-inl.h"
 #include "../../quantization/quantization_utils.h"
 #include "../../tensor/elemwise_unary_op.h"
 #include "../common.h"
+#include "./dnnl_transformer-inl.h"

Review comment:
       Changed order of includes?







[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721516530



##########
File path: src/operator/subgraph/dnnl/dnnl_fc.cc
##########
@@ -654,14 +651,14 @@ static bool SgMKLDNNAvoidFCQuantizeInput(const NodeAttrs& attrs,
   return avoid_indexes.count(index_to_check);
 }
 
-NNVM_REGISTER_OP(_sg_mkldnn_fully_connected)
-    .describe(R"code(_sg_mkldnn_fully_connected)code" ADD_FILELINE)
+NNVM_REGISTER_OP(_sg_dnnl_fully_connected)

Review comment:
       `_sg_dnnl_fully_connected` -> `_sg_onednn_fully_connected`







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723108402



##########
File path: cpp-package/example/inference/README.md
##########
@@ -27,7 +27,7 @@ This directory contains following examples. In order to run the examples, ensure
 
 ## [imagenet_inference.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp>)
 
-This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® MKL-DNN (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.
+This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by Intel® DNNL (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.

Review comment:
       Done.

##########
File path: docs/python_docs/python/tutorials/index.rst
##########
@@ -84,10 +84,10 @@ Performance
       How to use int8 in your model to boost training speed.
 
    .. card::
-      :title: MKL-DNN
+      :title: DNNL
       :link: performance/backend/mkldnn/index.html
 
-      How to get the most from your CPU by using Intel's MKL-DNN.
+      How to get the most from your CPU by using Intel's DNNL.

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942158889


   @mxnet-bot run ci [unix-cpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-939992583


   @mxnet-bot run ci [all]
   





[GitHub] [incubator-mxnet] mozga-intel commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mozga-intel commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942383847


   @szha Could you please review and help with the merge? Thanks! 





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938636417


   Jenkins CI successfully triggered : [windows-gpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724250058



##########
File path: src/operator/subgraph/dnnl/dnnl_fc_post_quantize_property.h
##########
@@ -146,22 +145,22 @@ class SgMKLDNNFCPostQuantizeSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNFCPostQuantizeSelector(disable_all, disable_float_output);
+    auto new_selector = SgDNNLFCPostQuantizeSelector(disable_all, disable_float_output);
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLFCPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCPostQuantizeProperty() {
+  SgDNNLFCPostQuantizeProperty() {
     disable_fuse_all     = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FUSE_ALL", false);
     disable_float_output = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FLOAT_OUTPUT", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConected post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCPostQuantizeProperty>();
+    static const std::string& name = "DNNL FullyConected post-quantization optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r716732729



##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -168,42 +166,42 @@ void SgMKLDNNConvOperator::Forward(const OpContext& ctx,
     }
   }
   CHECK_EQ(input_size, idx);
-  bool has_bias  = mkldnn_param.with_bn || !conv_param.no_bias;
+  bool has_bias  = dnnl_param.with_bn || !conv_param.no_bias;
   NDArray data   = inputs[in_data];
-  NDArray output = mkldnn_param.with_sum ? inputs[in_sum] : outputs[kOut];
+  NDArray output = dnnl_param.with_sum ? inputs[in_sum] : outputs[kOut];
 
   // Copy inputs[in_sum] into outputs[kOut] in case inplace optimization failed.
-  if (mkldnn_param.with_sum) {
+  if (dnnl_param.with_sum) {
     if (!initialized_) {
-      // TODO(zhennan): Currently, mkldnn fallback mechanism will break inplace option,
+      // TODO(zhennan): Currently, dnnl fallback mechanism will break inplace option,
       // which make check (req[kOut] == kWriteInplace) useless.
-      auto in_mkl_mem  = inputs[in_sum].GetMKLDNNData();
-      auto out_mkl_mem = outputs[kOut].GetMKLDNNData();
+      auto in_mkl_mem  = inputs[in_sum].GetDNNLData();
+      auto out_mkl_mem = outputs[kOut].GetDNNLData();
       if (in_mkl_mem->get_data_handle() == out_mkl_mem->get_data_handle()) {
         inplace_ = true;
       }
     }
     if (!inplace_) {
-      auto in_mkl_mem  = inputs[in_sum].GetMKLDNNData();
-      auto out_mkl_mem = outputs[kOut].GetMKLDNNData();
+      auto in_mkl_mem  = inputs[in_sum].GetDNNLData();
+      auto out_mkl_mem = outputs[kOut].GetDNNLData();
       if (outputs[kOut].dtype() == mshadow::kInt32) {
         const auto& mem_desc  = in_mkl_mem->get_desc();
-        const auto this_dtype = get_mkldnn_type(mshadow::kInt32);
+        const auto this_dtype = get_dnnl_type(mshadow::kInt32);
         auto omd              = mem_desc;
-        omd.data.data_type    = static_cast<mkldnn_data_type_t>(this_dtype);
-        mkldnn_mem_ptr tmp_mem(new mkldnn::memory(
-            omd, CpuEngine::Get()->get_engine(), out_mkl_mem->get_data_handle()));
-        MKLDNNStream::Get()->RegisterMem(tmp_mem);
-        MKLDNNStream::Get()->RegisterPrimArgs(
-            mkldnn::reorder(*in_mkl_mem, *tmp_mem),
-            {{MKLDNN_ARG_FROM, *in_mkl_mem}, {MKLDNN_ARG_TO, *tmp_mem}});
+        omd.data.data_type    = static_cast<dnnl_data_type_t>(this_dtype);
+        dnnl_mem_ptr tmp_mem(
+            new dnnl::memory(omd, CpuEngine::Get()->get_engine(), out_mkl_mem->get_data_handle()));

Review comment:
       out_dnnl_mem







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r715542462



##########
File path: include/mxnet/ndarray.h
##########
@@ -858,9 +858,9 @@ class NDArray {
     std::vector<Storage::Handle> aux_handles;
 
 #if MXNET_USE_ONEDNN == 1
-    /*! This is created when data is stored in MKLDNN format.
+    /*! This is created when data is stored in DNNL format.
      */
-    std::shared_ptr<MKLDNNMemory> mkl_mem_;
+    std::shared_ptr<DNNLMemory> mkl_mem_;

Review comment:
       dnnl







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-937865566


   > Review of the files from the `subgraph` commit. For easy-to-find cases, I have provided only one occurrence.
   
   Thanks for the help! :) 





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724244538



##########
File path: src/operator/subgraph/dnnl/dnnl_fc.cc
##########
@@ -654,14 +651,14 @@ static bool SgMKLDNNAvoidFCQuantizeInput(const NodeAttrs& attrs,
   return avoid_indexes.count(index_to_check);
 }
 
-NNVM_REGISTER_OP(_sg_mkldnn_fully_connected)
-    .describe(R"code(_sg_mkldnn_fully_connected)code" ADD_FILELINE)
+NNVM_REGISTER_OP(_sg_dnnl_fully_connected)

Review comment:
       > `_sg_dnnl_fully_connected` -> `_sg_onednn_fully_connected`
   
   Done.







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724250483



##########
File path: src/operator/subgraph/dnnl/dnnl_fc_property.h
##########
@@ -156,21 +155,21 @@ class SgMKLDNNFCSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNFCSelector(disable_fc_eltwise_, quantized_);
+    auto new_selector = SgDNNLFCSelector(disable_fc_eltwise_, quantized_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCProperty : public SubgraphProperty {
+class SgDNNLFCProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCProperty() {
+  SgDNNLFCProperty() {
     disable_fc_eltwise_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_FC_ELTWISE", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConnected optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCProperty>();
+    static const std::string& name = "DNNL FullyConnected optimization pass";

Review comment:
       done







[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724204583



##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -106,7 +105,7 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
     nnvm::ObjectPtr n = nnvm::Node::Create();
 
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_batch_norm_relu_" << std::to_string(subgraph_id);
+    node_name << "sg_dnnl_batch_norm_relu_" << std::to_string(subgraph_id);

Review comment:
       ```suggestion
       node_name << "sg_onednn_batch_norm_relu_" << std::to_string(subgraph_id);
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -91,8 +90,8 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN BN + ReLU optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNBNReLUProperty>();
+    static const std::string& name = "DNNL BN + ReLU optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN BN + ReLU optimization pass";
   ```

##########
File path: python/mxnet/amp/lists/symbol_fp16.py
##########
@@ -611,10 +611,10 @@
 
 if Features().is_enabled('ONEDNN'):
     FP32_FUNCS.extend([
-        '_sg_mkldnn_conv',
-        '_sg_mkldnn_fully_connected',
-        '_sg_mkldnn_selfatt_qk',
-        '_sg_mkldnn_selfatt_valatt',
+        '_sg_dnnl_conv',
+        '_sg_dnnl_fully_connected',
+        '_sg_dnnl_selfatt_qk',
+        '_sg_dnnl_selfatt_valatt',

Review comment:
       ```suggestion
           '_sg_onednn_conv',
           '_sg_onednn_fully_connected',
           '_sg_onednn_selfatt_qk',
           '_sg_onednn_selfatt_valatt',
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -490,7 +489,7 @@ class MKLDNNSelfAttValAttOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention val only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention val only supports "

Review comment:
       ```suggestion
       LOG(FATAL) << "Not implemented: subgraph oneDNN self attention val only supports "
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_align_scale_property.h
##########
@@ -117,13 +116,13 @@ class SgMKLDNNConcatPostQuantizeSelector : public SubgraphSelectorV2 {
   std::unordered_set<const nnvm::Node*> visit_list_;
 };
 
-class SgMKLDNNPostQuantizeAlignScaleProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeAlignScaleProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
+  SgDNNLPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization scale alignment optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeAlignScaleProperty>();
+    static const std::string& name = "DNNL post-quantization scale alignment optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN post-quantization scale alignment optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_subgraph_property.cc
##########
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#if MXNET_USE_ONEDNN == 1
+
+#include "dnnl_bn_relu_property.h"
+#include "dnnl_conv_property.h"
+#include "dnnl_elemwisemul_post_quantize_property.h"
+#include "dnnl_fc_post_quantize_property.h"
+#include "dnnl_fc_property.h"
+#include "dnnl_post_quantize_align_scale_property.h"
+#include "dnnl_post_quantize_property.h"
+#include "dnnl_transformer_post_quantize_property.h"
+#include "dnnl_transformer_qk_property.h"
+#include "dnnl_transformer_valatt_property.h"
+
+namespace mxnet {
+namespace op {
+
+MXNET_REGISTER_SUBGRAPH_BACKEND(DNNL)

Review comment:
       ```suggestion
   MXNET_REGISTER_SUBGRAPH_BACKEND(ONEDNN)
   ```
   This one might be harder to change
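Part of why this rename is harder is that the registered string is user-facing: optimize_for(backend=...) looks the name up verbatim. A minimal Python sketch of such a string-keyed registry (the names and structure here are illustrative, not MXNet's internals):

```python
# Minimal sketch of a string-keyed subgraph-backend registry. The lookup
# key is exactly the string users pass, so renaming the registered name
# (e.g. DNNL -> ONEDNN) is a visible, breaking change for callers.
_SUBGRAPH_BACKENDS = {}

def register_subgraph_backend(name):
    def wrap(cls):
        _SUBGRAPH_BACKENDS[name] = cls
        return cls
    return wrap

@register_subgraph_backend("ONEDNN")
class OneDNNBackend:
    pass

def optimize_for(backend):
    # Verbatim string lookup; an old name raises instead of aliasing.
    if backend not in _SUBGRAPH_BACKENDS:
        raise ValueError("unknown backend: " + backend)
    return _SUBGRAPH_BACKENDS[backend]
```

With a registry like this, user scripts that still pass the old backend string fail at the lookup, which is what makes the registered name part of the public API surface.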

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";
+    auto property                  = std::make_shared<SgDNNLTransformerQKProperty>();
     property->SetAttr<std::string>("property_name", name);
     property->SetAttr<bool>("inference_only", true);
-    if (dmlc::GetEnv("MXNET_DISABLE_MKLDNN_TRANSFORMER_OPT", 0)) {
+    if (dmlc::GetEnv("MXNET_DISABLE_DNNL_TRANSFORMER_OPT", 0)) {

Review comment:
       ```suggestion
       if (dmlc::GetEnv("MXNET_DISABLE_ONEDNN_TRANSFORMER_OPT", 0)) {
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_fc_post_quantize_property.h
##########
@@ -146,22 +145,22 @@ class SgMKLDNNFCPostQuantizeSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNFCPostQuantizeSelector(disable_all, disable_float_output);
+    auto new_selector = SgDNNLFCPostQuantizeSelector(disable_all, disable_float_output);
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLFCPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCPostQuantizeProperty() {
+  SgDNNLFCPostQuantizeProperty() {
     disable_fuse_all     = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FUSE_ALL", false);
     disable_float_output = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FLOAT_OUTPUT", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConected post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCPostQuantizeProperty>();
+    static const std::string& name = "DNNL FullyConected post-quantization optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN FullyConected post-quantization optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_property.h
##########
@@ -112,22 +111,22 @@ class SgMKLDNNPostQuantizeSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNPostQuantizeSelector();
+    auto new_selector = SgDNNLPostQuantizeSelector();
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeProperty() {
-    support_requantize_fusion_op_name.insert("_sg_mkldnn_conv");
+  SgDNNLPostQuantizeProperty() {
+    support_requantize_fusion_op_name.insert("_sg_dnnl_conv");
     support_requantize_fusion_op_name.insert("_contrib_quantized_elemwise_add");
     support_requantize_fusion_op_name.insert("_contrib_quantized_npi_add");
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeProperty>();
+    static const std::string& name = "DNNL post-quantization optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN post-quantization optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -123,7 +122,7 @@ class SgMKLDNNSelfAttQKOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention qk only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention qk only supports "

Review comment:
       ```suggestion
       LOG(FATAL) << "Not implemented: subgraph oneDNN self attention qk only supports "
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_elemwisemul_post_quantize_property.h
##########
@@ -161,7 +160,7 @@ class ElemwiseMulPostQuantizeProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN EltwiseMul post-quantization optimization pass";
+    static const std::string& name = "DNNL EltwiseMul post-quantization optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN EltwiseMul post-quantization optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -204,7 +199,7 @@ class SgMKLDNNConvProperty : public SubgraphProperty {
     nnvm::Symbol new_sym;
     new_sym.outputs.emplace_back(last_node);
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_";
+    node_name << "sg_dnnl_";

Review comment:
       ```suggestion
       node_name << "sg_onednn_";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -170,25 +165,25 @@ class SgMKLDNNConvSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNConvSelector(
+    auto new_selector = SgDNNLConvSelector(
         disable_all_, disable_conv_bn_, disable_conv_act_, disable_conv_sum_, quantize_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNConvProperty : public SubgraphProperty {
+class SgDNNLConvProperty : public SubgraphProperty {
  public:
-  SgMKLDNNConvProperty() {
+  SgDNNLConvProperty() {
     disable_conv_bn_  = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_BN", 0);
     disable_conv_act_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_RELU", 0);
     disable_conv_sum_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_SUM", 0);
 
     disable_all_ = disable_conv_bn_ && disable_conv_act_ && disable_conv_sum_;
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN convolution optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNConvProperty>();
+    static const std::string& name = "DNNL convolution optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN convolution optimization pass";
   ```

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -527,13 +527,13 @@ def quantize_model(sym, arg_params, aux_params, data_names=('data',),
 
     return qsym, qarg_params, aux_params
 
-def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
-                          ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
-                          calib_mode='entropy', calib_data=None, num_calib_batches=None,
-                          quantized_dtype='int8', quantize_mode='smart',
-                          quantize_granularity='tensor-wise', logger=None):
+def quantize_model_dnnl(sym, arg_params, aux_params, data_names=('data',),

Review comment:
       ```suggestion
   def quantize_model_onednn(sym, arg_params, aux_params, data_names=('data',),
   ```
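If backward compatibility for the old Python name matters, a deprecated alias is a common pattern. This is only a sketch: the new name follows the reviewer's suggestion above, the alias and its body are hypothetical:

```python
import warnings

def quantize_model_onednn(sym, arg_params, aux_params, **kwargs):
    # Stand-in body for the renamed function (real logic lives in MXNet).
    return sym, arg_params, aux_params

def quantize_model_mkldnn(*args, **kwargs):
    # Hypothetical deprecated alias preserving the old public name.
    warnings.warn(
        "quantize_model_mkldnn is deprecated; use quantize_model_onednn",
        DeprecationWarning,
        stacklevel=2,
    )
    return quantize_model_onednn(*args, **kwargs)
```

Existing user code keeps working through one release cycle while the warning points callers at the new name.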

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN Transformer optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_fc_property.h
##########
@@ -156,21 +155,21 @@ class SgMKLDNNFCSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNFCSelector(disable_fc_eltwise_, quantized_);
+    auto new_selector = SgDNNLFCSelector(disable_fc_eltwise_, quantized_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCProperty : public SubgraphProperty {
+class SgDNNLFCProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCProperty() {
+  SgDNNLFCProperty() {
     disable_fc_eltwise_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_FC_ELTWISE", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConnected optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCProperty>();
+    static const std::string& name = "DNNL FullyConnected optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN FullyConnected optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_valatt_property.h
##########
@@ -227,22 +226,22 @@ class SgMKLDNNTransformerValAttSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerValAttSelector();
+    auto new_selector = SgDNNLTransformerValAttSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerValAttProperty : public SubgraphProperty {
+class SgDNNLTransformerValAttProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerValAttProperty() {}
+  SgDNNLTransformerValAttProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerValAttProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       ```suggestion
       static const std::string& name = "oneDNN Transformer optimization pass";
   ```

##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -686,23 +682,22 @@ static bool SgMKLDNNConvOpStorageType(const nnvm::NodeAttrs& attrs,
   }
 }
 
-std::vector<std::pair<int, int>> SgMKLDNNConvInplaceOption(const NodeAttrs& attrs) {
-  auto const& param = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
-  if (param.full_conv_param.mkldnn_param.with_sum &&
-      !param.full_conv_param.mkldnn_param.dedup_sum) {
+std::vector<std::pair<int, int>> SgDNNLConvInplaceOption(const NodeAttrs& attrs) {
+  auto const& param = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
+  if (param.full_conv_param.dnnl_param.with_sum && !param.full_conv_param.dnnl_param.dedup_sum) {
     return std::vector<std::pair<int, int>>{{GetInSumIndex(param), 0}};
   } else {
     return std::vector<std::pair<int, int>>();
   }
 }
 
-nnvm::ObjectPtr SgMKLDNNConvQuantizedOp(const NodeAttrs& attrs) {
-  auto const& param    = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
+nnvm::ObjectPtr SgDNNLConvQuantizedOp(const NodeAttrs& attrs) {
+  auto const& param    = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
   nnvm::ObjectPtr node = nnvm::Node::Create();
-  node->attrs.op       = Op::Get("_sg_mkldnn_conv");
+  node->attrs.op       = Op::Get("_sg_dnnl_conv");
   const int k_ndims    = param.full_conv_param.conv_param.kernel.ndim();
   CHECK(k_ndims == 2U || k_ndims == 3U)
-      << "Quantized Convolution of MKL-DNN supports 2D/3D kernel currently."
+      << "Quantized Convolution of DNNL supports 2D/3D kernel currently."

Review comment:
       ```suggestion
         << "Quantized Convolution of oneDNN supports 2D/3D kernel currently."
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@mxnet.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938024042


   @mxnet-bot run ci [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723109157



##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification

Review comment:
       done







[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721509254



##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL

Review comment:
       No need for `Intel` there. The full library name is just 'oneDNN'







[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721230623



##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -23,25 +23,24 @@
 #include <utility>
 #include <vector>
 
-#include "./mkldnn_transformer-inl.h"
-
 #include "../../contrib/transformer-inl.h"
 #include "../../quantization/quantization_utils.h"
 #include "../../tensor/elemwise_unary_op.h"
 #include "../common.h"
+#include "./dnnl_transformer-inl.h"

Review comment:
       I believe our new configuration of clang-format shouldn't reorder includes.
   @mozga-intel is this still our current clang-format configuration? https://github.com/apache/incubator-mxnet/pull/20433/files
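For context, the clang-format knob involved is `SortIncludes`; a `.clang-format` fragment that leaves include order untouched might look like the sketch below (this shows only the single option, not the project's full configuration):

```yaml
# .clang-format fragment (illustrative): keep #include blocks in the order
# they were written; recent clang-format versions also accept `Never` here.
SortIncludes: false
```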







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-932175257


   @mxnet-bot run ci [centos-cpu, unix-gpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721239012



##########
File path: cd/python/pypi/pypi_package.sh
##########
@@ -22,11 +22,10 @@ set -ex
 export mxnet_variant=${1:?"Please specify the mxnet variant"}
 
 # Due to this PR: https://github.com/apache/incubator-mxnet/pull/14899
-# The setup.py expects that mkldnn_version.h be present in
+# The setup.py expects that dnnl_version.h be present in
 # mxnet-build/3rdparty/onednn/build/install/include
 # The artifact repository stores this file in the dependencies
 # and CD unpacks it to a directory called cd_misc
-# Nov. 2019 Update: With v1.1, MKL-DNN is renaming to DNNL. Hence changing the prefix of file name.
 if [ -f "cd_misc/dnnl_version.h" ]; then
   mkdir -p 3rdparty/onednn/include/oneapi/dnnl
   cp cd_misc/dnnl_version.h 3rdparty/onednn/include/oneapi/dnnl/.

Review comment:
       I do not understand.







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-935948722


   > Please use `git mv` to move or rename a file, a directory (an example: [link](https://github.com/apache/incubator-mxnet/pull/20606/files#diff-9c23a9af8ecce528f160528e8e2079f5e3b77f33194de47af7c63875fb85ead8))
   
   @mozga-intel I did this, but I believe that because most of the lines in some of the files changed, GitHub was automatically marking them as newly rewritten files. I moved the renaming/modification of the troublesome files into separate commits to make reviewing easier.
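   For what it's worth, the behavior described above can be reproduced in a throwaway repository: once a rename is staged with `git mv`, `git status --porcelain` reports it with the `R` status code (the file names below are placeholders, not the actual files from this PR):

```shell
# Minimal sketch of renaming a tracked file with `git mv` in a scratch repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo 'int x;' > mkldnn_conv.cc
git add mkldnn_conv.cc
git -c user.name=ci -c user.email=ci@example.com commit -qm 'initial'
# Stage the rename; Git records it as a rename, not a delete plus an add.
git mv mkldnn_conv.cc dnnl_conv.cc
# For a staged rename this prints a single line with status code "R".
git status --porcelain
```

   With the rename recorded this way, `git log --follow dnnl_conv.cc` also keeps the file's history across the rename.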





[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-933360484


   @anko-intel I will apply other changes after all reviews are done.
   





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r718220715



##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -168,42 +166,42 @@ void SgMKLDNNConvOperator::Forward(const OpContext& ctx,
     }
   }
   CHECK_EQ(input_size, idx);
-  bool has_bias  = mkldnn_param.with_bn || !conv_param.no_bias;
+  bool has_bias  = dnnl_param.with_bn || !conv_param.no_bias;
   NDArray data   = inputs[in_data];
-  NDArray output = mkldnn_param.with_sum ? inputs[in_sum] : outputs[kOut];
+  NDArray output = dnnl_param.with_sum ? inputs[in_sum] : outputs[kOut];
 
   // Copy inputs[in_sum] into outputs[kOut] in case inplace optimization failed.
-  if (mkldnn_param.with_sum) {
+  if (dnnl_param.with_sum) {
     if (!initialized_) {
-      // TODO(zhennan): Currently, mkldnn fallback mechanism will break inplace option,
+      // TODO(zhennan): Currently, dnnl fallback mechanism will break inplace option,
       // which make check (req[kOut] == kWriteInplace) useless.
-      auto in_mkl_mem  = inputs[in_sum].GetMKLDNNData();
-      auto out_mkl_mem = outputs[kOut].GetMKLDNNData();
+      auto in_mkl_mem  = inputs[in_sum].GetDNNLData();
+      auto out_mkl_mem = outputs[kOut].GetDNNLData();
       if (in_mkl_mem->get_data_handle() == out_mkl_mem->get_data_handle()) {
         inplace_ = true;
       }
     }
     if (!inplace_) {
-      auto in_mkl_mem  = inputs[in_sum].GetMKLDNNData();
-      auto out_mkl_mem = outputs[kOut].GetMKLDNNData();
+      auto in_mkl_mem  = inputs[in_sum].GetDNNLData();
+      auto out_mkl_mem = outputs[kOut].GetDNNLData();
       if (outputs[kOut].dtype() == mshadow::kInt32) {
         const auto& mem_desc  = in_mkl_mem->get_desc();
-        const auto this_dtype = get_mkldnn_type(mshadow::kInt32);
+        const auto this_dtype = get_dnnl_type(mshadow::kInt32);
         auto omd              = mem_desc;
-        omd.data.data_type    = static_cast<mkldnn_data_type_t>(this_dtype);
-        mkldnn_mem_ptr tmp_mem(new mkldnn::memory(
-            omd, CpuEngine::Get()->get_engine(), out_mkl_mem->get_data_handle()));
-        MKLDNNStream::Get()->RegisterMem(tmp_mem);
-        MKLDNNStream::Get()->RegisterPrimArgs(
-            mkldnn::reorder(*in_mkl_mem, *tmp_mem),
-            {{MKLDNN_ARG_FROM, *in_mkl_mem}, {MKLDNN_ARG_TO, *tmp_mem}});
+        omd.data.data_type    = static_cast<dnnl_data_type_t>(this_dtype);
+        dnnl_mem_ptr tmp_mem(
+            new dnnl::memory(omd, CpuEngine::Get()->get_engine(), out_mkl_mem->get_data_handle()));

Review comment:
       done.







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-931475316


   @mxnet-bot ci run [miscellaneous]
   





[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721203019



##########
File path: cpp-package/example/inference/README.md
##########
@@ -27,7 +27,7 @@ This directory contains following examples. In order to run the examples, ensure
 
 ## [imagenet_inference.cpp](<https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/inference/imagenet_inference.cpp>)
 
-This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by IntelĀ® MKL-DNN (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.
+This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. Now this script also supports inference with quantized CNN models generated by IntelĀ® DNNL (see this [quantization flow](https://github.com/apache/incubator-mxnet/blob/master/example/quantization/README.md)). By using C++ API, the latency of most models will be reduced to some extent compared with current Python implementation.

Review comment:
       oneDNN as official library name?

##########
File path: docs/python_docs/python/tutorials/index.rst
##########
@@ -84,10 +84,10 @@ Performance
       How to use int8 in your model to boost training speed.
 
    .. card::
-      :title: MKL-DNN
+      :title: DNNL

Review comment:
       oneDNN

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md
##########
@@ -208,9 +208,9 @@ o = exe.outputs[0]
 t = o.asnumpy()
 ```
 
-More detailed debugging and profiling information can be logged by setting the environment variable 'MKLDNN_VERBOSE':
+More detailed debugging and profiling information can be logged by setting the environment variable 'DNNL_VERBOSE':
 ```
-export MKLDNN_VERBOSE=1
+export DNNL_VERBOSE=1
 ```
 For example, by running above code snippet, the following debugging logs providing more insights on ONEDNN primitives `convolution` and `reorder`. That includes: Memory layout, infer shape and the time cost of primitive execution.

Review comment:
       ```suggestion
   For example, by running above code snippet, the following debugging logs providing more insights on oneDNN primitives `convolution` and `reorder`. That includes: Memory layout, infer shape and the time cost of primitive execution.
   ```

##########
File path: docs/python_docs/python/tutorials/index.rst
##########
@@ -84,10 +84,10 @@ Performance
       How to use int8 in your model to boost training speed.
 
    .. card::
-      :title: MKL-DNN
+      :title: DNNL
       :link: performance/backend/mkldnn/index.html
 
-      How to get the most from your CPU by using Intel's MKL-DNN.
+      How to get the most from your CPU by using Intel's DNNL.

Review comment:
       oneDNN

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.
 
    .. card::
-      :title: MKL-DNN Quantization
-      :link: mkldnn_quantization
+      :title: DNNL Quantization
+      :link: dnnl_quantization
 
-      How to perform quantization with MKLDNN
+      How to perform quantization with DNNL

Review comment:
       ```suggestion
         How to perform quantization with oneDNN
   ```

##########
File path: docs/python_docs/python/tutorials/performance/index.rst
##########
@@ -76,10 +76,10 @@ Accelerated Backend
    ..
       TBD Content
       .. card::
-         :title: MKL-DNN
+         :title: DNNL

Review comment:
       ```suggestion
            :title: oneDNN
   ```

##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```

Review comment:
       ```suggestion
     - Values: String ```(default="DNNL")``` if oneDNN is available, otherwise ```(default="")```
   ```

##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```
   - This variable controls the subgraph partitioning in MXNet.
   - This variable is used to perform ONEDNN FP32 operator fusion and quantization. Please refer to the [ONEDNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/mkldnn/operator_list.md) for how this variable is used and the list of fusion passes.

Review comment:
       ```suggestion
     - This variable is used to perform oneDNN FP32 operator fusion and quantization. Please refer to the [ONEDNN operator list](https://github.com/apache/incubator-mxnet/blob/v1.5.x/docs/tutorials/dnnl/operator_list.md) for how this variable is used and the list of fusion passes.
   ```
   Please double-check the directory name.

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.

Review comment:
       ```suggestion
         A guide on using oneDNN with MXNet.
   ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification

Review comment:
       ```suggestion
         :title: oneDNN Installation and Verification
   ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL

Review comment:
       ```suggestion
   Intel oneDNN
   ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/index.rst
##########
@@ -22,10 +22,10 @@ The following tutorials will help you learn how to use backend tools to boost pe
 .. container:: cards
 
   .. card::
-     :title: MKL-DNN
-     :link: mkldnn/index.html
+     :title: DNNL

Review comment:
       ```suggestion
        :title: oneDNN
   ```

##########
File path: docs/python_docs/python/tutorials/performance/backend/dnnl/index.rst
##########
@@ -15,22 +15,22 @@
    specific language governing permissions and limitations
    under the License.
 
-Intel MKL-DNN
+Intel DNNL
 =============
 
 .. container:: cards
 
    .. card::
-      :title: MKL-DNN Installation and Verification
-      :link: mkldnn_readme
+      :title: DNNL Installation and Verification
+      :link: dnnl_readme
 
-      A guide on using MKL-DNN with MXNet.
+      A guide on using DNNL with MXNet.
 
    .. card::
-      :title: MKL-DNN Quantization
-      :link: mkldnn_quantization
+      :title: DNNL Quantization

Review comment:
       ```suggestion
         :title: oneDNN Quantization
   ```

##########
File path: cd/python/pypi/pypi_package.sh
##########
@@ -22,11 +22,10 @@ set -ex
 export mxnet_variant=${1:?"Please specify the mxnet variant"}
 
 # Due to this PR: https://github.com/apache/incubator-mxnet/pull/14899
-# The setup.py expects that mkldnn_version.h be present in
+# The setup.py expects that dnnl_version.h be present in
 # mxnet-build/3rdparty/onednn/build/install/include
 # The artifact repository stores this file in the dependencies
 # and CD unpacks it to a directory called cd_misc
-# Nov. 2019 Update: With v1.1, MKL-DNN is renaming to DNNL. Hence changing the prefix of file name.
 if [ -f "cd_misc/dnnl_version.h" ]; then
   mkdir -p 3rdparty/onednn/include/oneapi/dnnl
   cp cd_misc/dnnl_version.h 3rdparty/onednn/include/oneapi/dnnl/.

Review comment:
       Please check the directory here: should it be `onednn` or `dnnl`?

##########
File path: docs/python_docs/python/tutorials/performance/index.rst
##########
@@ -76,10 +76,10 @@ Accelerated Backend
    ..
       TBD Content
       .. card::
-         :title: MKL-DNN
+         :title: DNNL
          :link: backend/mkldnn/mkldnn_readme
 
-         How to get the most from your CPU by using Intel's MKL-DNN.
+         How to get the most from your CPU by using Intel's DNNL.

Review comment:
       ```suggestion
            How to get the most from your CPU by using Intel's oneDNN.
   ```

##########
File path: README.md
##########
@@ -88,7 +88,7 @@ What's New
 
 ### Ecosystem News
 
-* [ONEDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/mkldnn/mkldnn_readme.md)
+* [ONEDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md)

Review comment:
       ```suggestion
   * [oneDNN for Faster CPU Performance](docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_readme.md)
   ```

##########
File path: ci/docker/runtime_functions.sh
##########
@@ -763,7 +763,7 @@ cd_unittest_ubuntu() {
     fi
 
     if [[ ${mxnet_variant} = *mkl ]]; then

Review comment:
       `*mkl` seems to be no longer valid as of MXNet 1.8 (or 1.7?), but I am not sure whether that also applies in this context.

##########
File path: docs/python_docs/python/tutorials/performance/backend/profiler.md
##########
@@ -211,11 +211,11 @@ Let's zoom in to check the time taken by operators
 The above picture visualizes the sequence in which the operators were executed and the time taken by each operator.
 
 ### Profiling ONEDNN Operators
-Reagrding ONEDNN operators, the library has already provided the internal profiling tool. Firstly, you need set `MKLDNN_VERBOSE=1` to enable internal profiler.
+Reagrding ONEDNN operators, the library has already provided the internal profiling tool. Firstly, you need set `DNNL_VERBOSE=1` to enable internal profiler.
 
-`$ MKLDNN_VERBOSE=1 python my_script.py > mkldnn_verbose.log`
+`$ DNNL_VERBOSE=1 python my_script.py > dnnl_verbose.log`
 
-Now, the detailed profiling insights of each ONEDNN prmitive are saved into `mkldnn_verbose.log` (like below).
+Now, the detailed profiling insights of each ONEDNN prmitive are saved into `dnnl_verbose.log` (like below).

Review comment:
       ```suggestion
   Now, the detailed profiling insights of each oneDNN primitive are saved into `dnnl_verbose.log` (like below).
   ```
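    As a minimal illustration of setting the variable per invocation (running MXNet itself is out of scope here, so a trivial Python one-liner stands in for the profiled workload):

```shell
# DNNL_VERBOSE is read by oneDNN at runtime; prefixing a single command with
# the assignment keeps it out of the wider shell environment. The one-liner
# below is only a placeholder for the real script being profiled.
DNNL_VERBOSE=1 python3 -c 'import os; print(os.environ.get("DNNL_VERBOSE"))'
# prints: 1
```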







[GitHub] [incubator-mxnet] szha commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
szha commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-933073996


   Is this change ready?





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r723112495



##########
File path: src/operator/subgraph/dnnl/dnnl_fc.cc
##########
@@ -654,14 +651,14 @@ static bool SgMKLDNNAvoidFCQuantizeInput(const NodeAttrs& attrs,
   return avoid_indexes.count(index_to_check);
 }
 
-NNVM_REGISTER_OP(_sg_mkldnn_fully_connected)
-    .describe(R"code(_sg_mkldnn_fully_connected)code" ADD_FILELINE)
+NNVM_REGISTER_OP(_sg_dnnl_fully_connected)

Review comment:
       I will create a separate JIRA ticket for that.







[GitHub] [incubator-mxnet] vpirogov commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
vpirogov commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721513839



##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```

Review comment:
       I would suggest using `ONEDNN` for the backend name. There's no need to introduce `DNNL` naming unless it was already there.







[GitHub] [incubator-mxnet] anko-intel commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
anko-intel commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r721266661



##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -527,13 +527,13 @@ def quantize_model(sym, arg_params, aux_params, data_names=('data',),
 
     return qsym, qarg_params, aux_params
 
-def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
-                          ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
-                          calib_mode='entropy', calib_data=None, num_calib_batches=None,
-                          quantized_dtype='int8', quantize_mode='smart',
-                          quantize_granularity='tensor-wise', logger=None):
+def quantize_model_dnnl(sym, arg_params, aux_params, data_names=('data',),
+                        ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
+                        calib_mode='entropy', calib_data=None, num_calib_batches=None,
+                        quantized_dtype='int8', quantize_mode='smart',
+                        quantize_granularity='tensor-wise', logger=None):
     """User-level API for generating a fusion + quantized model from a FP32 model
-    w/ or w/o calibration with Intel MKL-DNN.
+    w/ or w/o calibration with Intel DNNL.

Review comment:
       @PawelGlomski-Intel - I think only the names in documentation/descriptions should be oneDNN (but I am not attached to that), and the other names should be DNNL, as oneDNN itself also uses DNNL in its runtime environment variable names.







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938024095


   Jenkins CI successfully triggered : [centos-gpu, windows-gpu]





[GitHub] [incubator-mxnet] akarbown merged pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
akarbown merged pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606


   





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-942158948


   Jenkins CI successfully triggered : [unix-cpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-938636334


   @mxnet-bot run ci [windows-gpu]





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#discussion_r724212020



##########
File path: docs/static_site/src/pages/api/faq/env_var.md
##########
@@ -375,7 +375,7 @@ If ctypes is used, it must be `mxnet._ctypes.ndarray.NDArrayBase`.
   - This variable controls how many CuDNN dropout state resources to create for each GPU context for use in operator.
 
 * MXNET_SUBGRAPH_BACKEND
-  - Values: String ```(default="MKLDNN")``` if ONEDNN is avaliable, otherwise ```(default="")```
+  - Values: String ```(default="DNNL")``` if ONEDNN is avaliable, otherwise ```(default="")```

Review comment:
       Done.

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')

Review comment:
       Done.

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -552,9 +552,9 @@ def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
         raise ValueError('currently only supports single ctx, while received %s' % str(ctx))
     if ctx.device_type != 'cpu':
         raise ValueError(
-            'quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend')
+            'quantize_model_dnnl only support Intel cpu platform with DNNL Backend')
 
-    sym = sym.optimize_for(backend='MKLDNN_QUANTIZE')
+    sym = sym.optimize_for(backend='DNNL_QUANTIZE')

Review comment:
       Done.

##########
File path: src/operator/subgraph/dnnl/dnnl_fc.cc
##########
@@ -654,14 +651,14 @@ static bool SgMKLDNNAvoidFCQuantizeInput(const NodeAttrs& attrs,
   return avoid_indexes.count(index_to_check);
 }
 
-NNVM_REGISTER_OP(_sg_mkldnn_fully_connected)
-    .describe(R"code(_sg_mkldnn_fully_connected)code" ADD_FILELINE)
+NNVM_REGISTER_OP(_sg_dnnl_fully_connected)

Review comment:
       > `_sg_dnnl_fully_connected` -> `_sg_onednn_fully_connected`
   
   Done.

##########
File path: python/mxnet/amp/lists/symbol_fp16.py
##########
@@ -611,10 +611,10 @@
 
 if Features().is_enabled('ONEDNN'):
     FP32_FUNCS.extend([
-        '_sg_mkldnn_conv',
-        '_sg_mkldnn_fully_connected',
-        '_sg_mkldnn_selfatt_qk',
-        '_sg_mkldnn_selfatt_valatt',
+        '_sg_dnnl_conv',
+        '_sg_dnnl_fully_connected',
+        '_sg_dnnl_selfatt_qk',
+        '_sg_dnnl_selfatt_valatt',

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -106,7 +105,7 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
     nnvm::ObjectPtr n = nnvm::Node::Create();
 
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_batch_norm_relu_" << std::to_string(subgraph_id);
+    node_name << "sg_dnnl_batch_norm_relu_" << std::to_string(subgraph_id);

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_bn_relu_property.h
##########
@@ -91,8 +90,8 @@ class SgMKLDNNBNReLUProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN BN + ReLU optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNBNReLUProperty>();
+    static const std::string& name = "DNNL BN + ReLU optimization pass";

Review comment:
       done

##########
File path: python/mxnet/contrib/quantization.py
##########
@@ -527,13 +527,13 @@ def quantize_model(sym, arg_params, aux_params, data_names=('data',),
 
     return qsym, qarg_params, aux_params
 
-def quantize_model_mkldnn(sym, arg_params, aux_params, data_names=('data',),
-                          ctx=cpu(), excluded_sym_names=None, excluded_op_names=None,
-                          calib_mode='entropy', calib_data=None, num_calib_batches=None,
-                          quantized_dtype='int8', quantize_mode='smart',
-                          quantize_granularity='tensor-wise', logger=None):
+def quantize_model_dnnl(sym, arg_params, aux_params, data_names=('data',),

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_conv.cc
##########
@@ -686,23 +682,22 @@ static bool SgMKLDNNConvOpStorageType(const nnvm::NodeAttrs& attrs,
   }
 }
 
-std::vector<std::pair<int, int>> SgMKLDNNConvInplaceOption(const NodeAttrs& attrs) {
-  auto const& param = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
-  if (param.full_conv_param.mkldnn_param.with_sum &&
-      !param.full_conv_param.mkldnn_param.dedup_sum) {
+std::vector<std::pair<int, int>> SgDNNLConvInplaceOption(const NodeAttrs& attrs) {
+  auto const& param = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
+  if (param.full_conv_param.dnnl_param.with_sum && !param.full_conv_param.dnnl_param.dedup_sum) {
     return std::vector<std::pair<int, int>>{{GetInSumIndex(param), 0}};
   } else {
     return std::vector<std::pair<int, int>>();
   }
 }
 
-nnvm::ObjectPtr SgMKLDNNConvQuantizedOp(const NodeAttrs& attrs) {
-  auto const& param    = nnvm::get<MKLDNNConvFusionParam>(attrs.parsed);
+nnvm::ObjectPtr SgDNNLConvQuantizedOp(const NodeAttrs& attrs) {
+  auto const& param    = nnvm::get<DNNLConvFusionParam>(attrs.parsed);
   nnvm::ObjectPtr node = nnvm::Node::Create();
-  node->attrs.op       = Op::Get("_sg_mkldnn_conv");
+  node->attrs.op       = Op::Get("_sg_dnnl_conv");
   const int k_ndims    = param.full_conv_param.conv_param.kernel.ndim();
   CHECK(k_ndims == 2U || k_ndims == 3U)
-      << "Quantized Convolution of MKL-DNN supports 2D/3D kernel currently."
+      << "Quantized Convolution of DNNL supports 2D/3D kernel currently."

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -170,25 +165,25 @@ class SgMKLDNNConvSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNConvSelector(
+    auto new_selector = SgDNNLConvSelector(
         disable_all_, disable_conv_bn_, disable_conv_act_, disable_conv_sum_, quantize_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNConvProperty : public SubgraphProperty {
+class SgDNNLConvProperty : public SubgraphProperty {
  public:
-  SgMKLDNNConvProperty() {
+  SgDNNLConvProperty() {
     disable_conv_bn_  = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_BN", 0);
     disable_conv_act_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_RELU", 0);
     disable_conv_sum_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_CONV_SUM", 0);
 
     disable_all_ = disable_conv_bn_ && disable_conv_act_ && disable_conv_sum_;
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN convolution optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNConvProperty>();
+    static const std::string& name = "DNNL convolution optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_conv_property.h
##########
@@ -204,7 +199,7 @@ class SgMKLDNNConvProperty : public SubgraphProperty {
     nnvm::Symbol new_sym;
     new_sym.outputs.emplace_back(last_node);
     std::ostringstream node_name;
-    node_name << "sg_mkldnn_";
+    node_name << "sg_dnnl_";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_elemwisemul_post_quantize_property.h
##########
@@ -161,7 +160,7 @@ class ElemwiseMulPostQuantizeProperty : public SubgraphProperty {
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN EltwiseMul post-quantization optimization pass";
+    static const std::string& name = "DNNL EltwiseMul post-quantization optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_fc_post_quantize_property.h
##########
@@ -146,22 +145,22 @@ class SgMKLDNNFCPostQuantizeSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNFCPostQuantizeSelector(disable_all, disable_float_output);
+    auto new_selector = SgDNNLFCPostQuantizeSelector(disable_all, disable_float_output);
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLFCPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCPostQuantizeProperty() {
+  SgDNNLFCPostQuantizeProperty() {
     disable_fuse_all     = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FUSE_ALL", false);
     disable_float_output = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_QFC_FLOAT_OUTPUT", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConected post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCPostQuantizeProperty>();
+    static const std::string& name = "DNNL FullyConected post-quantization optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_fc_property.h
##########
@@ -156,21 +155,21 @@ class SgMKLDNNFCSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNFCSelector(disable_fc_eltwise_, quantized_);
+    auto new_selector = SgDNNLFCSelector(disable_fc_eltwise_, quantized_);
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNFCProperty : public SubgraphProperty {
+class SgDNNLFCProperty : public SubgraphProperty {
  public:
-  SgMKLDNNFCProperty() {
+  SgDNNLFCProperty() {
     disable_fc_eltwise_ = dmlc::GetEnv("MXNET_DISABLE_ONEDNN_FUSE_FC_ELTWISE", false);
   }
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN FullyConnected optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNFCProperty>();
+    static const std::string& name = "DNNL FullyConnected optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_align_scale_property.h
##########
@@ -117,13 +116,13 @@ class SgMKLDNNConcatPostQuantizeSelector : public SubgraphSelectorV2 {
   std::unordered_set<const nnvm::Node*> visit_list_;
 };
 
-class SgMKLDNNPostQuantizeAlignScaleProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeAlignScaleProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
+  SgDNNLPostQuantizeAlignScaleProperty() : SubgraphProperty(kAdjust) {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization scale alignment optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeAlignScaleProperty>();
+    static const std::string& name = "DNNL post-quantization scale alignment optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_post_quantize_property.h
##########
@@ -112,22 +111,22 @@ class SgMKLDNNPostQuantizeSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list.size(), 1);
-    auto new_selector = SgMKLDNNPostQuantizeSelector();
+    auto new_selector = SgDNNLPostQuantizeSelector();
     new_selector.Select(*matched_list[0]);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNPostQuantizeProperty : public SubgraphProperty {
+class SgDNNLPostQuantizeProperty : public SubgraphProperty {
  public:
-  SgMKLDNNPostQuantizeProperty() {
-    support_requantize_fusion_op_name.insert("_sg_mkldnn_conv");
+  SgDNNLPostQuantizeProperty() {
+    support_requantize_fusion_op_name.insert("_sg_dnnl_conv");
     support_requantize_fusion_op_name.insert("_contrib_quantized_elemwise_add");
     support_requantize_fusion_op_name.insert("_contrib_quantized_npi_add");
   }
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN post-quantization optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNPostQuantizeProperty>();
+    static const std::string& name = "DNNL post-quantization optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -123,7 +122,7 @@ class SgMKLDNNSelfAttQKOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention qk only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention qk only supports "

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer.cc
##########
@@ -490,7 +489,7 @@ class MKLDNNSelfAttValAttOp {
                 const std::vector<NDArray>& inputs,
                 const std::vector<OpReqType>& req,
                 const std::vector<NDArray>& outputs) {
-    LOG(FATAL) << "Not implemented: subgraph mkldnn self attention val only supports "
+    LOG(FATAL) << "Not implemented: subgraph dnnl self attention val only supports "

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_qk_property.h
##########
@@ -153,22 +152,22 @@ class SgMKLDNNTransformerQKSelector : public SubgraphSelector {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerQKSelector();
+    auto new_selector = SgDNNLTransformerQKSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerQKProperty : public SubgraphProperty {
+class SgDNNLTransformerQKProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerQKProperty() {}
+  SgDNNLTransformerQKProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerQKProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";
+    auto property                  = std::make_shared<SgDNNLTransformerQKProperty>();
     property->SetAttr<std::string>("property_name", name);
     property->SetAttr<bool>("inference_only", true);
-    if (dmlc::GetEnv("MXNET_DISABLE_MKLDNN_TRANSFORMER_OPT", 0)) {
+    if (dmlc::GetEnv("MXNET_DISABLE_DNNL_TRANSFORMER_OPT", 0)) {

Review comment:
       done
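
The environment-variable toggles renamed throughout this PR (such as `MXNET_DISABLE_DNNL_TRANSFORMER_OPT` in the hunk above) are read in C++ via `dmlc::GetEnv(name, default)`. Its behavior for integer flags can be mirrored in a few lines of Python; this is an illustrative sketch, not MXNet code, and the helper names are assumptions, while the variable name is taken from the diff:

```python
import os

def get_env_int(name, default):
    """Mimic dmlc::GetEnv for integer flags: unset -> default, else parse the value."""
    raw = os.environ.get(name)
    return default if raw is None else int(raw)

def transformer_opt_enabled():
    """The transformer optimization pass runs unless the disable flag is set to nonzero."""
    return get_env_int("MXNET_DISABLE_DNNL_TRANSFORMER_OPT", 0) == 0
```

The default of `0` means the pass stays enabled unless a user explicitly exports the variable, matching the `dmlc::GetEnv(..., 0)` call in the diff.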

##########
File path: src/operator/subgraph/dnnl/dnnl_transformer_valatt_property.h
##########
@@ -227,22 +226,22 @@ class SgMKLDNNTransformerValAttSelector : public SubgraphSelectorV2 {
 
   void Reset() override {
     CHECK_GE(matched_list_.size(), 1);
-    auto new_selector = SgMKLDNNTransformerValAttSelector();
+    auto new_selector = SgDNNLTransformerValAttSelector();
     new_selector.Select(*matched_list_[0], nullptr);
     *this = new_selector;
   }
 };
 
-class SgMKLDNNTransformerValAttProperty : public SubgraphProperty {
+class SgDNNLTransformerValAttProperty : public SubgraphProperty {
  public:
-  SgMKLDNNTransformerValAttProperty() {}
+  SgDNNLTransformerValAttProperty() {}
 
   static SubgraphPropertyPtr Create() {
-    static const std::string& name = "MKLDNN Transformer optimization pass";
-    auto property                  = std::make_shared<SgMKLDNNTransformerValAttProperty>();
+    static const std::string& name = "DNNL Transformer optimization pass";

Review comment:
       done

##########
File path: src/operator/subgraph/dnnl/dnnl_subgraph_property.cc
##########
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#if MXNET_USE_ONEDNN == 1
+
+#include "dnnl_bn_relu_property.h"
+#include "dnnl_conv_property.h"
+#include "dnnl_elemwisemul_post_quantize_property.h"
+#include "dnnl_fc_post_quantize_property.h"
+#include "dnnl_fc_property.h"
+#include "dnnl_post_quantize_align_scale_property.h"
+#include "dnnl_post_quantize_property.h"
+#include "dnnl_transformer_post_quantize_property.h"
+#include "dnnl_transformer_qk_property.h"
+#include "dnnl_transformer_valatt_property.h"
+
+namespace mxnet {
+namespace op {
+
+MXNET_REGISTER_SUBGRAPH_BACKEND(DNNL)

Review comment:
       done







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20606: [submodule] Remove soon to be obsolete dnnl nomenclature from mxnet

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20606:
URL: https://github.com/apache/incubator-mxnet/pull/20606#issuecomment-926543986


   Hey @bartekkuncer , Thanks for submitting the PR 
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [unix-cpu, sanity, edge, centos-cpu, website, clang, windows-gpu, unix-gpu, centos-gpu, windows-cpu, miscellaneous]
   *** 
   _Note_: 
    Only following 3 categories can trigger CI :PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   

