Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2022/03/25 00:47:59 UTC

[GitHub] [incubator-mxnet] bartekkuncer opened a new pull request #20981: [FEATURE] Add oneDNN support for npi: log, exp, square and sqrt

bartekkuncer opened a new pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981


   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] Code is well-documented
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@mxnet.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837337328



##########
File path: src/operator/nn/dnnl/dnnl_eltwise-inl.h
##########
@@ -50,17 +52,18 @@ class DNNLTanhFwd {
   std::shared_ptr<eltwise_fwd_pd_t> fwd_pd;
 };
 
-inline void DNNLTanhForward(const nnvm::NodeAttrs& attrs,
-                            const OpContext& ctx,
-                            const NDArray& input,
-                            const OpReqType& req,
-                            const NDArray& output) {
-  DNNLTanhFwd& fwd = DNNLTanhFwd::GetTanhForward(input, output);
+template <dnnl::algorithm algorithm>
+inline void DNNLEltwiseForward(const nnvm::NodeAttrs& attrs,

Review comment:
       The DNNLRun function takes only five parameters, and using a template parameter allows passing the algorithm as a sixth.
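
   The constraint described above can be sketched in isolation. This is a hypothetical, self-contained illustration (the enum, the computations, and `Run` are stand-ins, not the real oneDNN/MXNet types): a runner that only accepts a fixed-arity callback forces the extra "parameter" to be baked in as a non-type template argument.

   ```cpp
   #include <cassert>

   // Stand-in for dnnl::algorithm (the real enum lives in the oneDNN headers).
   enum class algorithm { eltwise_exp, eltwise_sqrt };

   // The callback's runtime signature is fixed, so the algorithm is selected
   // at compile time via a non-type template parameter instead of being
   // passed as an additional runtime argument.
   template <algorithm alg>
   int EltwiseForward(int x) {
     if constexpr (alg == algorithm::eltwise_exp)
       return x + 1;  // placeholder computation, not the real exp
     else
       return x * x;  // placeholder computation, not the real sqrt
   }

   // A runner that only accepts a callback of fixed arity, like DNNLRun.
   int Run(int (*fn)(int), int x) { return fn(x); }
   ```

   Each instantiation (`EltwiseForward<algorithm::eltwise_exp>`, etc.) is a distinct function with the same signature, so all of them fit the runner's fixed callback type.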







[GitHub] [incubator-mxnet] bartekkuncer commented on pull request #20981: [FEATURE] Add oneDNN support for npi: log, exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#issuecomment-1081056850


   @mxnet-bot run ci [centos-gpu]





[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837385725



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       Here is the function if you are interested:
   ```c++
    template <class>
    inline constexpr bool dependent_false_v = false;
    template <typename MXNetOP>
    constexpr dnnl::algorithm DNNLAlgorithm() {
      if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::plus>) {
        return dnnl::algorithm::binary_add;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::minus>) {
        return dnnl::algorithm::binary_sub;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::mul>) {
        return dnnl::algorithm::binary_mul;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::div>) {
        return dnnl::algorithm::binary_div;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::tanh>) {
        return dnnl::algorithm::eltwise_tanh;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::exp>) {
        return dnnl::algorithm::eltwise_exp;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::square>) {
        return dnnl::algorithm::eltwise_square;
      } else if constexpr (std::is_same_v<MXNetOP, op::mshadow_op::square_root>) {
        return dnnl::algorithm::eltwise_sqrt;
      } else {
        static_assert(dependent_false_v<MXNetOP>, "Unsupported operator type");
        return dnnl::algorithm();
      }
    }
   ```







[GitHub] [incubator-mxnet] bgawrych merged pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bgawrych merged pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981


   





[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837631965



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       It is also used in src/operator/numpy/np_elemwise_broadcast_op.h, which already includes this file. I therefore prefer to keep it here, so as not to add an additional include there, as well as an include in the dnnl_eltwise-inl.h header to make it work with mshadow_op.







[GitHub] [incubator-mxnet] bgawrych commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bgawrych commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837551810



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       Can it be moved to some dnnl_eltwise-inl.h header?







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20981: [FEATURE] Add oneDNN support for npi: log, exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#issuecomment-1078546654


   Hey @bartekkuncer , Thanks for submitting the PR 
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [website, windows-cpu, centos-gpu, unix-gpu, windows-gpu, centos-cpu, unix-cpu, clang, edge, sanity, miscellaneous]
   *** 
   _Note_: 
    Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   





[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837304928



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       I believe we can turn `DNNLAlgorithm` into a constexpr function which would be more readable imo.

##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};
+
+template <typename OP, bool computeMixed = true>
+inline void EltwiseComputeExCPU(const nnvm::NodeAttrs& attrs,
+                                const OpContext& ctx,
+                                const std::vector<mxnet::NDArray>& inputs,
+                                const std::vector<OpReqType>& req,
+                                const std::vector<mxnet::NDArray>& outputs) {
+  auto fallBackFunction =
+      computeMixed ? UnaryOp::ComputeMixedType<cpu, OP> : UnaryOp::Compute<cpu, OP>;
+  if (SupportDNNLEltwise(inputs[0], outputs[0])) {
     DNNL_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
-    DNNLRun(DNNLTanhForward, attrs, ctx, inputs[0], req[0], outputs[0]);
-    DNNL_OPCHECK_RUN(
-        (UnaryOp::ComputeMixedType<cpu, mshadow_op::tanh>), attrs, ctx, inputs, req, outputs);
+    DNNLRun(
+        DNNLEltwiseForward<DNNLAlgorithm<OP>::value>, attrs, ctx, inputs[0], req[0], outputs[0]);
+    DNNL_OPCHECK_RUN(fallBackFunction, attrs, ctx, inputs, req, outputs);
   } else {
-    FallBackCompute(
-        UnaryOp::ComputeMixedType<cpu, mshadow_op::tanh>, attrs, ctx, inputs, req, outputs);
+    FallBackCompute(fallBackFunction, attrs, ctx, inputs, req, outputs);

Review comment:
       ```suggestion
       if constexpr (computeMixed) {
           FallBackCompute(UnaryOp::ComputeMixedType<cpu, OP>, attrs, ctx, inputs, req, outputs);
       } else {
           FallBackCompute(UnaryOp::Compute<cpu, OP>, attrs, ctx, inputs, req, outputs);
       }
   ```

##########
File path: src/operator/nn/dnnl/dnnl_eltwise-inl.h
##########
@@ -50,17 +52,18 @@ class DNNLTanhFwd {
   std::shared_ptr<eltwise_fwd_pd_t> fwd_pd;
 };
 
-inline void DNNLTanhForward(const nnvm::NodeAttrs& attrs,
-                            const OpContext& ctx,
-                            const NDArray& input,
-                            const OpReqType& req,
-                            const NDArray& output) {
-  DNNLTanhFwd& fwd = DNNLTanhFwd::GetTanhForward(input, output);
+template <dnnl::algorithm algorithm>
+inline void DNNLEltwiseForward(const nnvm::NodeAttrs& attrs,

Review comment:
       Any reason why this is a template?







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837342015



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};
+
+template <typename OP, bool computeMixed = true>
+inline void EltwiseComputeExCPU(const nnvm::NodeAttrs& attrs,
+                                const OpContext& ctx,
+                                const std::vector<mxnet::NDArray>& inputs,
+                                const std::vector<OpReqType>& req,
+                                const std::vector<mxnet::NDArray>& outputs) {
+  auto fallBackFunction =
+      computeMixed ? UnaryOp::ComputeMixedType<cpu, OP> : UnaryOp::Compute<cpu, OP>;
+  if (SupportDNNLEltwise(inputs[0], outputs[0])) {
     DNNL_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
-    DNNLRun(DNNLTanhForward, attrs, ctx, inputs[0], req[0], outputs[0]);
-    DNNL_OPCHECK_RUN(
-        (UnaryOp::ComputeMixedType<cpu, mshadow_op::tanh>), attrs, ctx, inputs, req, outputs);
+    DNNLRun(
+        DNNLEltwiseForward<DNNLAlgorithm<OP>::value>, attrs, ctx, inputs[0], req[0], outputs[0]);
+    DNNL_OPCHECK_RUN(fallBackFunction, attrs, ctx, inputs, req, outputs);
   } else {
-    FallBackCompute(
-        UnaryOp::ComputeMixedType<cpu, mshadow_op::tanh>, attrs, ctx, inputs, req, outputs);
+    FallBackCompute(fallBackFunction, attrs, ctx, inputs, req, outputs);

Review comment:
       fallBackFunction is used twice, which is why it exists as a local variable.
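
   The pattern discussed above can be sketched standalone. This is a hypothetical illustration (the function names and computations are stand-ins, not MXNet's real `UnaryOp` members): the ternary compiles because both candidate functions share the exact same function-pointer type, so the selected fallback can be stored once and reused at both call sites.

   ```cpp
   #include <cassert>

   // Stand-ins for UnaryOp::ComputeMixedType<cpu, OP> and UnaryOp::Compute<cpu, OP>.
   int MixedCompute(int x) { return x + 10; }
   int PlainCompute(int x) { return x + 1; }

   // Mirrors the reviewed code: the ternary selects the fallback once; both
   // operands decay to the same function-pointer type, int (*)(int), so the
   // resulting pointer can be reused wherever a fallback is needed.
   template <bool computeMixed>
   int ComputeExCPU(int x) {
     auto fallBack = computeMixed ? MixedCompute : PlainCompute;
     return fallBack(x);
   }
   ```

   The `if constexpr` alternative suggested in the review avoids the runtime pointer entirely, at the cost of spelling out the fallback call in each branch.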







[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837382660



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       I thought `op::mshadow_op` was an enum, like `dnnl::algorithm`. We could still turn this into a function, but it would not be that much more readable anymore.
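
   For comparison, the two mappings under discussion can be sketched side by side over stand-in types (the op tags and the enum below are hypothetical placeholders, not the real mshadow/oneDNN types): a trait struct specialized per op type versus an equivalent `constexpr` function dispatching on the same type tags.

   ```cpp
   #include <cassert>
   #include <type_traits>

   // Stand-ins for the mshadow op tag types and the dnnl::algorithm enum.
   struct plus_op {};
   struct tanh_op {};
   enum class algorithm { binary_add, eltwise_tanh };

   // Trait-struct mapping, as in the PR: one specialization per op type.
   template <typename OP> struct DNNLAlgo {};
   template <> struct DNNLAlgo<plus_op> {
     static constexpr algorithm value = algorithm::binary_add;
   };
   template <> struct DNNLAlgo<tanh_op> {
     static constexpr algorithm value = algorithm::eltwise_tanh;
   };

   // Equivalent constexpr-function mapping over the same type tags.
   template <typename OP>
   constexpr algorithm DNNLAlgoFn() {
     if constexpr (std::is_same_v<OP, plus_op>)
       return algorithm::binary_add;
     else
       return algorithm::eltwise_tanh;
   }
   ```

   Both forms resolve entirely at compile time; the choice between them is largely a readability preference, as the thread concludes.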







[GitHub] [incubator-mxnet] PawelGlomski-Intel commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
PawelGlomski-Intel commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837383591



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};
+
+template <typename OP, bool computeMixed = true>
+inline void EltwiseComputeExCPU(const nnvm::NodeAttrs& attrs,
+                                const OpContext& ctx,
+                                const std::vector<mxnet::NDArray>& inputs,
+                                const std::vector<OpReqType>& req,
+                                const std::vector<mxnet::NDArray>& outputs) {
+  auto fallBackFunction =
+      computeMixed ? UnaryOp::ComputeMixedType<cpu, OP> : UnaryOp::Compute<cpu, OP>;
+  if (SupportDNNLEltwise(inputs[0], outputs[0])) {
     DNNL_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
-    DNNLRun(DNNLTanhForward, attrs, ctx, inputs[0], req[0], outputs[0]);
-    DNNL_OPCHECK_RUN(
-        (UnaryOp::ComputeMixedType<cpu, mshadow_op::tanh>), attrs, ctx, inputs, req, outputs);
+    DNNLRun(
+        DNNLEltwiseForward<DNNLAlgorithm<OP>::value>, attrs, ctx, inputs[0], req[0], outputs[0]);
+    DNNL_OPCHECK_RUN(fallBackFunction, attrs, ctx, inputs, req, outputs);
   } else {
-    FallBackCompute(
-        UnaryOp::ComputeMixedType<cpu, mshadow_op::tanh>, attrs, ctx, inputs, req, outputs);
+    FallBackCompute(fallBackFunction, attrs, ctx, inputs, req, outputs);

Review comment:
       I was thinking about removing the runtime check, but yeah, it's not worth repeating the code.
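To illustrate the shape being discussed: picking the fallback function once from the template bool, then reusing the same pointer on both the fast path and the fallback path, avoids duplicating the ternary in each branch. This is a toy sketch with hypothetical stand-in kernels, not the real MXNet signatures.

```cpp
#include <cassert>

// Hypothetical stand-ins for the two fallback kernels and the fast path
// (the real code uses UnaryOp::ComputeMixedType / UnaryOp::Compute / DNNLRun).
int MixedFallback(int x) { return 2 * x; }
int PlainFallback(int x) { return x + 1; }
int FastPath(int x) { return 2 * x; }

// Mirrors the shape of EltwiseComputeExCPU: the fallback is selected once
// from the compile-time flag and shared by both code paths.
template <bool computeMixed>
int EltwiseDispatch(int x, bool fastPathSupported) {
  auto fallback = computeMixed ? MixedFallback : PlainFallback;
  if (fastPathSupported) {
    return FastPath(x);  // stands in for DNNLRun(...)
  }
  return fallback(x);    // stands in for FallBackCompute(...)
}
```

Since `computeMixed` is a template parameter, the ternary folds away at compile time anyway; the only runtime branch left is the support check.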







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837383913



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       I like it as is :)







[GitHub] [incubator-mxnet] bartekkuncer commented on a change in pull request #20981: [FEATURE] Add oneDNN support for npi: exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
bartekkuncer commented on a change in pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#discussion_r837341071



##########
File path: src/operator/tensor/elemwise_unary_op.h
##########
@@ -457,30 +457,67 @@ class UnaryOp : public OpBase {
 };
 
 #if MXNET_USE_ONEDNN == 1
-inline bool TanhStorageType(const nnvm::NodeAttrs& attrs,
-                            const int dev_mask,
-                            DispatchMode* dispatch_mode,
-                            std::vector<int>* in_attrs,
-                            std::vector<int>* out_attrs) {
+inline bool EltwiseStorageType(const nnvm::NodeAttrs& attrs,
+                               const int dev_mask,
+                               DispatchMode* dispatch_mode,
+                               std::vector<int>* in_attrs,
+                               std::vector<int>* out_attrs) {
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 1);
 
   return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, in_attrs, out_attrs);
 }
 
-inline void TanhComputeExCPU(const nnvm::NodeAttrs& attrs,
-                             const OpContext& ctx,
-                             const std::vector<mxnet::NDArray>& inputs,
-                             const std::vector<OpReqType>& req,
-                             const std::vector<mxnet::NDArray>& outputs) {
-  if (SupportDNNLTanh(inputs[0], outputs[0])) {
+template <typename OP>
+struct DNNLAlgorithm {};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::plus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_add;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::minus> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_sub;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::mul> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_mul;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::div> {
+  static const dnnl::algorithm value = dnnl::algorithm::binary_div;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::tanh> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_tanh;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::exp> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_exp;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_square;
+};
+template <>
+struct DNNLAlgorithm<op::mshadow_op::square_root> {
+  static const dnnl::algorithm value = dnnl::algorithm::eltwise_sqrt;
+};

Review comment:
       `op::mshadow_op::XXX` is not a literal type afaik.
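On terseness: even without treating the operator tags as values, the same per-type mapping could be spelled as a specialized variable template, which is the trait in a shorter form. This is only a sketch; `algorithm` and the tag types below are hypothetical stand-ins for `dnnl::algorithm` and the mshadow operators.

```cpp
#include <cassert>

// Hypothetical stand-ins for dnnl::algorithm and the mshadow operator tags.
enum class algorithm { binary_add, binary_mul, eltwise_sqrt };
struct plus_t {};
struct mul_t {};
struct square_root_t {};

template <typename>
constexpr bool always_false_v = false;

// Primary template fails at instantiation for unmapped operator types,
// matching the behavior of the empty primary DNNLAlgorithm<> struct.
template <typename OP>
constexpr algorithm dnnl_algorithm_v = [] {
  static_assert(always_false_v<OP>, "no oneDNN algorithm mapped for this operator");
  return algorithm::binary_add;
}();

// One line per operator instead of a four-line struct specialization.
template <>
constexpr algorithm dnnl_algorithm_v<plus_t> = algorithm::binary_add;
template <>
constexpr algorithm dnnl_algorithm_v<mul_t> = algorithm::binary_mul;
template <>
constexpr algorithm dnnl_algorithm_v<square_root_t> = algorithm::eltwise_sqrt;

static_assert(dnnl_algorithm_v<mul_t> == algorithm::binary_mul, "");
```

Usage would shrink `DNNLAlgorithm<OP>::value` to `dnnl_algorithm_v<OP>`; the semantics are otherwise identical.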







[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20981: [FEATURE] Add oneDNN support for npi: log, exp, square and sqrt

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20981:
URL: https://github.com/apache/incubator-mxnet/pull/20981#issuecomment-1081056933


   Jenkins CI successfully triggered : [centos-gpu]

