Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2022/01/28 14:27:43 UTC

[GitHub] [incubator-mxnet] bartekkuncer opened a new pull request #20857: [v1.9.x][submodule] Upgrade oneDNN to v2.4.4

bartekkuncer opened a new pull request #20857:
URL: https://github.com/apache/incubator-mxnet/pull/20857


   ## Description ##
   This change upgrades the oneDNN version used on the v1.9.x branch to v2.4.4.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@mxnet.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] bgawrych commented on a change in pull request #20857: [v1.9.x][submodule] Upgrade oneDNN to v2.4.4

Posted by GitBox <gi...@apache.org>.
bgawrych commented on a change in pull request #20857:
URL: https://github.com/apache/incubator-mxnet/pull/20857#discussion_r795716939



##########
File path: src/operator/nn/mkldnn/mkldnn_convolution.cc
##########
@@ -112,34 +112,41 @@ std::shared_ptr<mkldnn::convolution_forward::primitive_desc> GetConvFwdImpl(
     int mask = (param.requantize_scales.size() > 1) ? 2 : 0;
     attr.set_output_scales(mask, param.requantize_scales);
   }
-  auto GetConvFwdPd = [&param, &data, &weights, &output,
-                       &attr](const mkldnn::convolution_forward::desc &desc) {
-    auto engine = CpuEngine::Get()->get_engine();
-    try {
-      // MKL-DNN introduced padded formats since 0.15 which require more memory
-      // compared to the actual size of the tensor. Currently, MKL-DNN operators
-      // still reuse memory from memory planning, so here we need to select a
-      // suboptimal kernel for computation that has the expected memory size requirements
-      auto conv_pd =
-          std::make_shared<mkldnn::convolution_forward::primitive_desc>(desc, attr, engine);
-      while (conv_pd->dst_desc().get_size() != GetArraySize(output) ||
-             conv_pd->src_desc().get_size() != GetArraySize(data) ||
-             (!param.mkldnn_param.quantized &&
-              conv_pd->weights_desc().get_size() != GetArraySize(weights))) {
-        // next_impl() will visit desc and engine, please make sure they are still alive here.
-        CHECK(conv_pd->next_impl()) << "No convolution implementation for this request.";
-      }
-      return conv_pd;
-    } catch (mkldnn::error &e) {
-      if (e.status == mkldnn_unimplemented && param.mkldnn_param.quantized) {
-        LOG(ERROR) << "AVX512-BW support or Intel(R) MKL dependency is "
-                      "required for int8 convolution";
-      } else {
-        LOG(ERROR) << e.message;
-      }
-      throw;
-    }
-  };
+  auto GetConvFwdPd =
+      [&param, &data, &weights, &output, &attr](const mkldnn::convolution_forward::desc& desc) {
+        auto engine = CpuEngine::Get()->get_engine();
+        try {
+          // MKLDNN introduced padded formats since 0.15 which require more memory compared to the
+          // actual size of the tensor. Currently, MKLDNN operators still reuse memory from memory
+          // planning, so here we need to select a suboptimal kernel for computation that has the
+          // expected memory size requirements
+          auto conv_pd =
+              std::make_shared<mkldnn::convolution_forward::primitive_desc>(desc, attr, engine);
+          while (conv_pd->dst_desc().get_size() != GetArraySize(output) ||
+                 conv_pd->src_desc().get_size() != GetArraySize(data) ||
+                 (!param.mkldnn_param.quantized &&
+                  conv_pd->weights_desc().get_size() != GetArraySize(weights)) ||
+                 // With the upgrade of MKLDNN to version 2.4+
+                 // tests/python/mkl/test_subgraph.py::test_pos_conv_add started failing. Switching
+                 // away from primitive with weight mkldnn::format_tag ABcd4b16a4b in order to
+                 // temporairly fix the issue until full fix arrives. Tracking issue:

Review comment:
    ```suggestion
                 // temporarily fix the issue until full fix arrives. Tracking issue:
    ```
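
For readers unfamiliar with the pattern under review: the loop in this hunk walks oneDNN's list of candidate convolution implementations with next_impl() until it finds one whose (possibly padded) memory layouts match the buffer sizes MXNet's memory planner already reserved. Below is a minimal standalone sketch of that pattern against the oneDNN v2.x C++ API; the tensor shapes, scale values, and the expected-size computation are illustrative assumptions, not MXNet code.

```cpp
// Minimal sketch (not MXNet code) of oneDNN v2.x implementation iteration.
// Shapes, scales, and the expected-size computation are illustrative.
#include <cstddef>
#include <iostream>
#include <vector>
#include "dnnl.hpp"

int main() {
  using namespace dnnl;
  engine eng(engine::kind::cpu, 0);

  // format_tag::any lets oneDNN choose a (possibly padded) blocked layout.
  memory::desc src({1, 16, 28, 28}, memory::data_type::f32, memory::format_tag::any);
  memory::desc wei({32, 16, 3, 3}, memory::data_type::f32, memory::format_tag::any);
  memory::desc dst({1, 32, 26, 26}, memory::data_type::f32, memory::format_tag::any);

  // Output scales attached via primitive_attr, mirroring the hunk's requantize
  // logic: mask 2 means one scale per output channel, mask 0 a single scale.
  std::vector<float> scales{0.5f};
  primitive_attr attr;
  attr.set_output_scales(scales.size() > 1 ? 2 : 0, scales);

  convolution_forward::desc desc(prop_kind::forward_inference,
                                 algorithm::convolution_direct, src, wei, dst,
                                 /*strides=*/{1, 1}, /*padding_l=*/{0, 0},
                                 /*padding_r=*/{0, 0});
  convolution_forward::primitive_desc pd(desc, attr, eng);

  // The plain (unpadded) tensor size, analogous to GetArraySize() above.
  const size_t expected_dst = size_t{1} * 32 * 26 * 26 * sizeof(float);

  // Skip implementations whose chosen dst layout needs more memory than the
  // buffer that was already planned; next_impl() returns false once the
  // implementation list is exhausted.
  while (pd.dst_desc().get_size() != expected_dst) {
    if (!pd.next_impl()) {
      std::cerr << "No convolution implementation with the expected size\n";
      return 1;
    }
  }
  std::cout << "Selected implementation: " << pd.impl_info_str() << "\n";
  return 0;
}
```

The MXNet hunk applies the same loop to the src and weights descriptors as well and, per the new comment, additionally skips the ABcd4b16a4b weight layout as a temporary workaround for the test_pos_conv_add failure.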







[GitHub] [incubator-mxnet] josephevans merged pull request #20857: [v1.9.x][submodule] Upgrade oneDNN to v2.4.4

Posted by GitBox <gi...@apache.org>.
josephevans merged pull request #20857:
URL: https://github.com/apache/incubator-mxnet/pull/20857


   





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20857: [v1.9.x][submodule] Upgrade oneDNN to v2.4.4

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20857:
URL: https://github.com/apache/incubator-mxnet/pull/20857#issuecomment-1024275443


   Hey @bartekkuncer, thanks for submitting the PR.
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [windows-cpu, edge, website, unix-gpu, clang, sanity, centos-cpu, unix-cpu, centos-gpu, miscellaneous, windows-gpu]
   *** 
   _Note_: 
    Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
   All CI tests must pass before the PR can be merged. 
   

