Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/09 07:00:28 UTC

[GitHub] [incubator-mxnet] Yiyan66 opened a new pull request #17254: [numpy] change unary infer type

Yiyan66 opened a new pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254
 
 
   ## Description ##
   change unary infer type
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556866
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +82,39 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool UnaryOpType(const nnvm::NodeAttrs& attrs,
+                              std::vector<int>* in_attrs,
+                              std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  if (mxnet::common::is_float(a_type)) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat64);
 
 Review comment:
   use `kFloat32` as default for now.
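   A minimal sketch of that change, assuming the branch from the diff above (not the final code):

     } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
       // map integer inputs to float32 by default instead of float64
       TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
     }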


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556902
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cu
 ##########
 @@ -39,6 +39,10 @@ NNVM_REGISTER_OP(_np_copy)
   NNVM_REGISTER_OP(__name$)                                               \
   .set_attr<FCompute>("FCompute<gpu>", UnaryOp::Compute<gpu, __kernel$>)
 
+#define MXNET_OPERATOR_REGISTER_NUMPY_UNARY_GPU2(__name$, __kernel$)       \
 
 Review comment:
   `MXNET_OPERATOR_REGISTER_NUMPY_MIXED_TYPE_UNARY_GPU`
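   A possible shape for the renamed macro, assuming the same mixed-type kernel `UnaryOp::ComputeMixedType` used on the CPU side (a sketch, not the final registration):

     #define MXNET_OPERATOR_REGISTER_NUMPY_MIXED_TYPE_UNARY_GPU(__name$, __kernel$)       \
       NNVM_REGISTER_OP(__name$)                                                          \
       .set_attr<FCompute>("FCompute<gpu>", UnaryOp::ComputeMixedType<gpu, __kernel$>)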


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364600688
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +82,39 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool UnaryOpType(const nnvm::NodeAttrs& attrs,
+                              std::vector<int>* in_attrs,
+                              std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  if (mxnet::common::is_float(a_type)) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat64);
+  } else {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat16);
+  }
+  return out_attrs->at(0) != -1;
+}
+
+#define MXNET_OPERATOR_REGISTER_NUMPY_UNARY2(__name$, __input_name$, __kernel$)            \
 
 Review comment:
   also, better name for this one.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364604506
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -1832,6 +1832,35 @@ def hybrid_forward(self, F, a, *args, **kwargs):
 
     funcs = {
         'absolute' : (lambda x: -1. * (x < 0) + (x > 0), -1.0, 1.0),
+        'logical_not' : (None, -1.0, 1.0),
+        'negative' : (lambda x: -1. * _np.ones(x.shape), -1.0, 1.0),
+        'reciprocal' : (lambda x: -1. / (x ** 2), 0.01, 1.0),
+        'sign' : (None, -1.0, 1.0),
+        'square' : (lambda x: 2.0 * x, -1.0, 1.0),
+    }
+    if has_tvm_ops():
+        funcs['rad2deg'] = (lambda x: 180. / _np.pi * _np.ones(x.shape), -1.0, 1.0)
+        funcs['deg2rad'] = (lambda x: _np.pi / 180. * _np.ones(x.shape), -1.0, 1.0)
+    ndim = random.choice([2, 3, 4])
+    shape = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
+    for shape in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]:
+        for func, func_data in funcs.items():
+            ref_grad, low, high = func_data
+            check_unary_func(func, ref_grad, shape, low, high)
+
+@with_seed()
 
 Review comment:
   1 more blank line above.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556934
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,68 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeIn(const nnvm::NodeAttrs &attrs,
 
 Review comment:
   `MixedUnaryBackwardUseInCompute`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r381822843
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +83,90 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool MixedUnaryOpType(const nnvm::NodeAttrs& attrs,
+                             std::vector<int>* in_attrs,
+                             std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  if (mxnet::common::is_float(a_type)) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  } else {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat16);
+  }
+  return out_attrs->at(0) != -1;
+}
+
+#define MXNET_OPERATOR_REGISTER_NUMPY_MIXED_TYPE_UNARY(__name$, __input_name$, __kernel$) \
+  NNVM_REGISTER_OP(__name$)                                                               \
+  .set_num_inputs(1)                                                                      \
+  .set_num_outputs(1)                                                                     \
+  .set_attr<mxnet::FInferShape>("FInferShape", ElemwiseShape<1, 1>)                       \
+  .set_attr<nnvm::FInferType>("FInferType", MixedUnaryOpType)                             \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                                       \
+    [](const NodeAttrs& attrs){                                                           \
+      return std::vector<std::pair<int, int> >{{0, 0}};                                   \
+    })                                                                                    \
+  .set_attr<nnvm::FListInputNames>("FListInputNames",                                     \
+    [](const NodeAttrs& attrs) {                                                          \
+      return std::vector<std::string>{__input_name$};                                     \
+    })                                                                                    \
+  .set_attr<FCompute>("FCompute<cpu>", UnaryOp::ComputeMixedType<cpu, __kernel$>)         \
+  .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
+
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN(name)            \
+  NNVM_REGISTER_OP(name)                                            \
+  .set_num_inputs(2)                                                \
+  .set_num_outputs(1)                                               \
+  .set_attr<nnvm::FListInputNames>("FListInputNames",               \
+    [](const NodeAttrs& attrs) {                                    \
+      return std::vector<std::string>{"lhs", "rhs"};                \
+    })                                                              \
+  .set_attr<mxnet::FInferShape>("FInferShape", ElemwiseShape<2, 1>)  \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                 \
+    [](const NodeAttrs& attrs){                                     \
+      return std::vector<std::pair<int, int> >{{0, 0}, {1, 0}};     \
+    })                                                              \
+  .add_argument("lhs", "NDArray-or-Symbol", "first input")          \
+  .add_argument("rhs", "NDArray-or-Symbol", "second input")
+
+/*! \brief Binary launch */
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_INOUT(name)         \
+  NNVM_REGISTER_OP(name)                                            \
+  .set_num_inputs(3)                                                \
+  .set_num_outputs(1)                                               \
+  .set_attr<nnvm::FListInputNames>("FListInputNames",               \
+    [](const NodeAttrs& attrs) {                                    \
+      return std::vector<std::string>{"lhs", "rhs"};                \
+    })                                                              \
+  .set_attr<mxnet::FInferShape>("FInferShape", ElemwiseShape<3, 1>)  \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                 \
+    [](const NodeAttrs& attrs){                                     \
+      return std::vector<std::pair<int, int> >{{0, 0}, {1, 0}};     \
+    })                                                              \
+  .add_argument("lhs", "NDArray-or-Symbol", "first input")          \
+  .add_argument("rhs", "NDArray-or-Symbol", "second input")
+
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_CPU(__name$, __kernel$)                    \
+  MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN(__name$)                                         \
+  .set_attr<FCompute>("FCompute<cpu>", ElemwiseBinaryOp::MixedUnaryBackwardUseInCompute<cpu,  \
+                                                                       __kernel$>)            \
+  .set_attr<FResourceRequest>("FResourceRequest",  /* For Sparse CSR */                       \
 
 Review comment:
   It seems `FResourceRequest` is not used; you can simply get rid of this `set_attr`.
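   With that `set_attr` dropped, the macro would reduce to something like this (a sketch, assuming nothing else in the macro changes):

     #define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_CPU(__name$, __kernel$)                   \
       MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN(__name$)                                        \
       .set_attr<FCompute>("FCompute<cpu>", ElemwiseBinaryOp::MixedUnaryBackwardUseInCompute<cpu, \
                                                                            __kernel$>)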


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365562549
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,68 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeIn(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
+                      const std::vector<TBlob> &inputs,
+                      const std::vector<OpReqType> &req,
+                      const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
+        + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+        if (size != 0) {
+          Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+          outputs[0].dptr<DType>(),
+          inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+        }
+      });
+    });
+  }
+
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeInOut(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
+                      const std::vector<TBlob> &inputs,
+                      const std::vector<OpReqType> &req,
+                      const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 3U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
 
 Review comment:
   I think you should directly check whether `outputs[0]`'s type is an integer type here.
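   For example, something along these lines (a sketch, using the `is_int` helper that the later revision of this code relies on):

     if (mxnet::common::is_int(outputs[0].type_flag_)) {
       LOG(FATAL) << "Operator " << attrs.op->name
                  << " does not support integer output type";
     }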


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r366778186
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,67 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void MixedUnaryBackwardUseInCompute(const nnvm::NodeAttrs &attrs,
+                                             const OpContext &ctx,
+                                             const std::vector<TBlob> &inputs,
+                                             const std::vector<OpReqType> &req,
+                                             const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
+        + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+        if (size != 0) {
+          Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+          outputs[0].dptr<DType>(),
+          inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+        }
+      });
+    });
+  }
+
+  template<typename xpu, typename OP>
+  static void MixedUnaryBackwardUseInOutCompute(const nnvm::NodeAttrs &attrs,
+                                    const OpContext &ctx,
+                                    const std::vector<TBlob> &inputs,
+                                    const std::vector<OpReqType> &req,
+                                    const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 3U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (mxnet::common::is_int(outputs[0].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support int type";
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
 
 Review comment:
   this case and the above case could be merged.
   Plus, the error message is not meaningful enough.
   `mshadow::dtype_string(outputs[0].type_flag_)` will give you the corresponding string that represents the data type.
   Also the problem with showing the op's name is that it will be something in the form of `_backward_npi_xxx`. I think here simply say that "gradient computation for xxx type is not supported" is better. ("xxx" should be the string returned by call to `dtype_string`)
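   A sketch of the merged check with the suggested message (using `is_int` and `dtype_string` as named above; not the final wording):

     if (mxnet::common::is_int(outputs[0].type_flag_) ||
         outputs[0].type_flag_ == mshadow::kBool) {
       LOG(FATAL) << "gradient computation for "
                  << mshadow::dtype_string(outputs[0].type_flag_)
                  << " type is not supported";
     }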


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556939
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,68 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeIn(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
+                      const std::vector<TBlob> &inputs,
+                      const std::vector<OpReqType> &req,
+                      const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
+        + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+        if (size != 0) {
+          Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+          outputs[0].dptr<DType>(),
+          inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+        }
+      });
+    });
+  }
+
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeInOut(const nnvm::NodeAttrs &attrs,
 
 Review comment:
   `MixedUnaryBackwardUseInOutCompute`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556490
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -1865,15 +1890,53 @@ def hybrid_forward(self, F, a, *args, **kwargs):
         'arccosh' : (lambda x: 1./(x**2 - 1.)**(1./2.), 2.0, 5.0),
         'arctanh' : (lambda x: -1./(x**2 - 1.), -0.99, 0.99)
     }
-    if has_tvm_ops():
-        funcs['rad2deg'] = (lambda x: 180. / _np.pi * _np.ones(x.shape), -1.0, 1.0)
-        funcs['deg2rad'] = (lambda x: _np.pi / 180. * _np.ones(x.shape), -1.0, 1.0)
+
+    dtypes = ['float16', 'float32', 'float64', 'int8', 'uint8', 'int32', 'int64', 'bool']
     ndim = random.choice([2, 3, 4])
-    shape = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
-    for shape in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]:
-        for func, func_data in funcs.items():
+    i = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
+    shapes = [i for i in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]];
+    for func, func_data in funcs.items():
+        for dtype, shape in itertools.product(dtypes, shapes):
+            rtol = 1e-2 if dtype == np.float16 else 1e-3
+            atol = 1e-4 if dtype == np.float16 else 1e-5
             ref_grad, low, high = func_data
-            check_unary_func(func, ref_grad, shape, low, high)
+            # get rid of warning: divide by zero
+            if((func=='log' or func=='log10' or func=='log2') and
+                (dtype=='int8' or dtype=='uint8' or dtype=='int32' or
+                dtype=='int64')):
+                low = 1
+            if (func=='arctanh' and dtype=='bool'):
+                continue
+            np_func = getattr(_np, func)
+            mx_func = TestUnary2(func)
+            np_test_data = _np.random.uniform(low, high, shape).astype(dtype)
+            mx_test_data = mx.numpy.array(np_test_data)
+            for hybridize in [True, False]:
+                if hybridize:
+                    mx_func.hybridize()
+                if ref_grad:
+                    mx_test_data.attach_grad()
+                np_out = np_func(np_test_data)
+                with mx.autograd.record():
+                    y = mx_func(mx_test_data)
+                assert y.shape == np_out.shape
+                assert_almost_equal(y.asnumpy(), np_out, rtol=1e-3, atol=1e-5)
+                if np_out.dtype == np.bool_:
+                    assert y.dtype == np.bool_
+
+            np_out = getattr(_np, func)(np_test_data)
+            mx_out = getattr(mx.np, func)(mx_test_data)
+            assert mx_out.shape == np_out.shape
+            assert_almost_equal(mx_out.asnumpy(), np_out, rtol=1e-3, atol=1e-5)
+
 
 Review comment:
   Please also add checks for backward computation for floating point input cases.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r381821492
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +83,90 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool MixedUnaryOpType(const nnvm::NodeAttrs& attrs,
+                             std::vector<int>* in_attrs,
+                             std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  if (mxnet::common::is_float(a_type)) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  } else {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat16);
+  }
+  return out_attrs->at(0) != -1;
+}
+
+#define MXNET_OPERATOR_REGISTER_NUMPY_MIXED_TYPE_UNARY(__name$, __input_name$, __kernel$) \
+  NNVM_REGISTER_OP(__name$)                                                               \
+  .set_num_inputs(1)                                                                      \
+  .set_num_outputs(1)                                                                     \
+  .set_attr<mxnet::FInferShape>("FInferShape", ElemwiseShape<1, 1>)                       \
+  .set_attr<nnvm::FInferType>("FInferType", MixedUnaryOpType)                             \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                                       \
+    [](const NodeAttrs& attrs){                                                           \
+      return std::vector<std::pair<int, int> >{{0, 0}};                                   \
+    })                                                                                    \
+  .set_attr<nnvm::FListInputNames>("FListInputNames",                                     \
+    [](const NodeAttrs& attrs) {                                                          \
+      return std::vector<std::string>{__input_name$};                                     \
+    })                                                                                    \
+  .set_attr<FCompute>("FCompute<cpu>", UnaryOp::ComputeMixedType<cpu, __kernel$>)         \
+  .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
+
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN(name)            \
 
 Review comment:
   `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_IN` here and `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_INOUT` below


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365557162
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,68 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeIn(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
+                      const std::vector<TBlob> &inputs,
+                      const std::vector<OpReqType> &req,
+                      const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
+        + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+        if (size != 0) {
+          Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+          outputs[0].dptr<DType>(),
+          inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+        }
+      });
+    });
+  }
+
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeInOut(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
+                      const std::vector<TBlob> &inputs,
+                      const std::vector<OpReqType> &req,
+                      const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 3U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
 
 Review comment:
   use `MXNET_REAL_TYPE_SWITCH` here since you already excluded the integer case.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556884
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +82,39 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool UnaryOpType(const nnvm::NodeAttrs& attrs,
+                              std::vector<int>* in_attrs,
+                              std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  if (mxnet::common::is_float(a_type)) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat64);
+  } else {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat16);
+  }
+  return out_attrs->at(0) != -1;
+}
+
+#define MXNET_OPERATOR_REGISTER_NUMPY_UNARY_MIXEDTYPE(__name$, __input_name$, __kernel$)            \
 
 Review comment:
   `MXNET_OPERATOR_REGISTER_NUMPY_MIXED_TYPE_UNARY`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r368161953
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -850,6 +950,16 @@ class ElemwiseBinaryOp : public OpBase {
   .set_attr<FCompute>("FCompute<cpu>", ElemwiseBinaryOp::Compute<cpu, __kernel$>)              \
   .set_attr<FComputeEx>("FComputeEx<cpu>", ElemwiseBinaryOp::ComputeEx<cpu, __kernel$>)
 
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_WITH_SPARSE_CPU_DR(__name$, __kernel$)       \
 
 Review comment:
   `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_USEIN_BWD_CPU`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r368161953
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -850,6 +950,16 @@ class ElemwiseBinaryOp : public OpBase {
   .set_attr<FCompute>("FCompute<cpu>", ElemwiseBinaryOp::Compute<cpu, __kernel$>)              \
   .set_attr<FComputeEx>("FComputeEx<cpu>", ElemwiseBinaryOp::ComputeEx<cpu, __kernel$>)
 
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_WITH_SPARSE_CPU_DR(__name$, __kernel$)       \
 
 Review comment:
   `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_CPU`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556923
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,68 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeIn(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
+                      const std::vector<TBlob> &inputs,
+                      const std::vector<OpReqType> &req,
+                      const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
+        + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+        if (size != 0) {
+          Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+          outputs[0].dptr<DType>(),
+          inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+        }
+      });
+    });
+  }
+
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeInOut(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
 
 Review comment:
   alignment


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556412
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -1832,6 +1832,36 @@ def hybrid_forward(self, F, a, *args, **kwargs):
 
     funcs = {
         'absolute' : (lambda x: -1. * (x < 0) + (x > 0), -1.0, 1.0),
+        'logical_not' : (None, -1.0, 1.0),
+        'negative' : (lambda x: -1. * _np.ones(x.shape), -1.0, 1.0),
+        'reciprocal' : (lambda x: -1. / (x ** 2), 0.01, 1.0),
+        'sign' : (None, -1.0, 1.0),
+        'square' : (lambda x: 2.0 * x, -1.0, 1.0),
+    }
+    if has_tvm_ops():
+        funcs['rad2deg'] = (lambda x: 180. / _np.pi * _np.ones(x.shape), -1.0, 1.0)
+        funcs['deg2rad'] = (lambda x: _np.pi / 180. * _np.ones(x.shape), -1.0, 1.0)
+    ndim = random.choice([2, 3, 4])
+    shape = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
+    for shape in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]:
+        for func, func_data in funcs.items():
+            ref_grad, low, high = func_data
+            check_unary_func(func, ref_grad, shape, low, high)
+
+
+@with_seed()
+@use_np
+def test_np_mixedType_unary_funcs():
+    class TestUnary2(HybridBlock):
 
 Review comment:
   Change the name to `TestMixedUnary`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364603602
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cu
 ##########
 @@ -39,6 +39,10 @@ NNVM_REGISTER_OP(_np_copy)
   NNVM_REGISTER_OP(__name$)                                               \
   .set_attr<FCompute>("FCompute<gpu>", UnaryOp::Compute<gpu, __kernel$>)
 
+#define MXNET_OPERATOR_REGISTER_NUMPY_UNARY_GPU2(__name$, __kernel$)       \
 
 Review comment:
   same here, get a better name for this.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364600122
 
 

 ##########
 File path: src/operator/mxnet_op.h
 ##########
 @@ -759,6 +759,17 @@ struct backward_grad {
   }
 };
 
+template<typename OP, int req>
+struct op_with_req2 {
+  typedef OP Operation;
+
+  /*! \brief input is one tensor */
+  template<typename DType, typename IType>
 
 Review comment:
   better use `OType` and `IType`
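   For instance, the struct could look roughly like this (the name and the body of `Map` are assumptions here, mirroring the existing `op_with_req`):

     template<typename OP, int req>
     struct mixed_op_with_req {
       typedef OP Operation;
       /*! \brief input is one tensor */
       template<typename OType, typename IType>
       MSHADOW_XINLINE static void Map(index_t i, OType *out, const IType *in) {
         KERNEL_ASSIGN(out[i], req, OP::Map(in[i]));
       }
     };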


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364600022
 
 

 ##########
 File path: src/operator/mxnet_op.h
 ##########
 @@ -759,6 +759,17 @@ struct backward_grad {
   }
 };
 
+template<typename OP, int req>
+struct op_with_req2 {
 
 Review comment:
   please get a better name.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556838
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +82,39 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool UnaryOpType(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   `MixedUnaryOpType`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556446
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -1832,6 +1832,36 @@ def hybrid_forward(self, F, a, *args, **kwargs):
 
     funcs = {
         'absolute' : (lambda x: -1. * (x < 0) + (x > 0), -1.0, 1.0),
+        'logical_not' : (None, -1.0, 1.0),
+        'negative' : (lambda x: -1. * _np.ones(x.shape), -1.0, 1.0),
+        'reciprocal' : (lambda x: -1. / (x ** 2), 0.01, 1.0),
+        'sign' : (None, -1.0, 1.0),
+        'square' : (lambda x: 2.0 * x, -1.0, 1.0),
+    }
+    if has_tvm_ops():
+        funcs['rad2deg'] = (lambda x: 180. / _np.pi * _np.ones(x.shape), -1.0, 1.0)
+        funcs['deg2rad'] = (lambda x: _np.pi / 180. * _np.ones(x.shape), -1.0, 1.0)
 
 Review comment:
   I think both `rad2deg` and `deg2rad` support mixed precision.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364604329
 
 

 ##########
 File path: src/operator/tensor/elemwise_unary_op_trig.cc
 ##########
 @@ -387,6 +409,28 @@ MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_sinh, unary_bwd<msha
       return ret;
     });
 
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR2(_backward_npi_sinh,
+                                                   unary_bwd<mshadow_op::sinh_grad>);
+
+/*NNVM_REGISTER_OP(_backward_npi_sinh)
 
 Review comment:
   remove dead code.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r368162144
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -850,6 +950,16 @@ class ElemwiseBinaryOp : public OpBase {
   .set_attr<FCompute>("FCompute<cpu>", ElemwiseBinaryOp::Compute<cpu, __kernel$>)              \
   .set_attr<FComputeEx>("FComputeEx<cpu>", ElemwiseBinaryOp::ComputeEx<cpu, __kernel$>)
 
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_WITH_SPARSE_CPU_DR(__name$, __kernel$)       \
+  MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN(__name$)                                           \
+  .set_attr<FCompute>("FCompute<cpu>", ElemwiseBinaryOp::MixedUnaryBackwardUseInCompute<cpu,    \
+                                                                       __kernel$>)              \
+
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_INOUT_WITH_SPARSE_CPU_DR(__name$, __kernel$)    \
 
 Review comment:
   `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_USEINOUT_BWD_CPU`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364604576
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -1865,15 +1889,54 @@ def hybrid_forward(self, F, a, *args, **kwargs):
         'arccosh' : (lambda x: 1./(x**2 - 1.)**(1./2.), 2.0, 5.0),
         'arctanh' : (lambda x: -1./(x**2 - 1.), -0.99, 0.99)
     }
-    if has_tvm_ops():
-        funcs['rad2deg'] = (lambda x: 180. / _np.pi * _np.ones(x.shape), -1.0, 1.0)
-        funcs['deg2rad'] = (lambda x: _np.pi / 180. * _np.ones(x.shape), -1.0, 1.0)
+
+    dtypes = ['float16', 'float32', 'float64', 'int8', 'uint8', 'int32', 'int64', 'bool']
     ndim = random.choice([2, 3, 4])
-    shape = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
-    for shape in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]:
-        for func, func_data in funcs.items():
+    i = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
+    shapes = [i for i in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]];
+    for func, func_data in funcs.items():
+        for dtype, shape in itertools.product(dtypes, shapes):
+            rtol = 1e-2 if dtype == np.float16 else 1e-3
+            atol = 1e-4 if dtype == np.float16 else 1e-5
             ref_grad, low, high = func_data
-            check_unary_func(func, ref_grad, shape, low, high)
+            # get rid of warning: divide by zero
+            if((func=='log' or func=='log10' or func=='log2') and
+                (dtype=='int8' or dtype=='uint8' or dtype=='int32' or
+                dtype=='int64')):
+                low = 1
+            if (func=='arctanh' and dtype=='bool'):
+                continue
+            np_func = getattr(_np, func)
+            mx_func = TestUnary2(func)
+            np_test_data = _np.random.uniform(low, high, shape).astype(dtype)
+            mx_test_data = mx.numpy.array(np_test_data)
+            for hybridize in [True, False]:
+                if hybridize:
+                    mx_func.hybridize()
+                if ref_grad:
+                    mx_test_data.attach_grad()
+                np_out = np_func(np_test_data)
+                with mx.autograd.record():
+                    y = mx_func(mx_test_data)
+                assert y.shape == np_out.shape
+                assert_almost_equal(y.asnumpy(), np_out, rtol=1e-3, atol=1e-5)
+                if np_out.dtype == np.bool_:
+                    assert y.dtype == np.bool_
+
+            np_out = getattr(_np, func)(np_test_data)
+            mx_out = getattr(mx.np, func)(mx_test_data)
+            assert mx_out.shape == np_out.shape
+            assert_almost_equal(mx_out.asnumpy(), np_out, rtol=1e-3, atol=1e-5)
+
+            assertRaises(NotImplementedError, getattr(np, func), mx_test_data, where=False)
+            assertRaises(NotImplementedError, getattr(np, func), mx_test_data,  subok=False)
+            assertRaises(NotImplementedError, getattr(np, func), mx_test_data,  dtype=_np.int8)
+            assertRaises(TypeError, getattr(np, func), mx_test_data,  dtype="abcdefg")
+            assertRaises(NotImplementedError, getattr(np, func), mx_test_data,  casting='safe')
+            assertRaises(TypeError, getattr(np, func), mx_test_data,  casting='mxnet')
+            assertRaises(NotImplementedError, getattr(np, func), mx_test_data,  order='C')
+            assertRaises(NotImplementedError, getattr(np, func), mx_test_data,  order='mxnet')
+
 
 Review comment:
   get rid of this blank line.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r366778630
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,67 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void MixedUnaryBackwardUseInCompute(const nnvm::NodeAttrs &attrs,
+                                             const OpContext &ctx,
+                                             const std::vector<TBlob> &inputs,
+                                             const std::vector<OpReqType> &req,
+                                             const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
 
 Review comment:
   Same for the error message here, since `outputs[0].type_flag_` is technically the same as `inputs[1].type_flag_`.
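   Applied here, that would read roughly as follows (a sketch, keeping the names from the diff above):

     if (!mxnet::common::is_float(inputs[1].type_flag_)) {
       LOG(FATAL) << "gradient computation for "
                  << mshadow::dtype_string(inputs[1].type_flag_)
                  << " type is not supported";
     }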


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364603714
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,66 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void Compute2(const nnvm::NodeAttrs &attrs,
 
 Review comment:
  Better names for both `Compute2` and `Compute3`, please.
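
  The later hunks quoted in this thread use `MixedUnaryBackwardUseInCompute` and
  `MixedUnaryBackwardUseInOutCompute`, which appear to be the renamed versions of
  these two functions; a declaration-only sketch of that naming (the second
  signature is assumed to match the first):

      template<typename xpu, typename OP>
      static void MixedUnaryBackwardUseInCompute(const nnvm::NodeAttrs &attrs,
                                                 const OpContext &ctx,
                                                 const std::vector<TBlob> &inputs,
                                                 const std::vector<OpReqType> &req,
                                                 const std::vector<TBlob> &outputs);

      template<typename xpu, typename OP>
      static void MixedUnaryBackwardUseInOutCompute(const nnvm::NodeAttrs &attrs,
                                                    const OpContext &ctx,
                                                    const std::vector<TBlob> &inputs,
                                                    const std::vector<OpReqType> &req,
                                                    const std::vector<TBlob> &outputs);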


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r366773395
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,67 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void MixedUnaryBackwardUseInCompute(const nnvm::NodeAttrs &attrs,
+                                             const OpContext &ctx,
+                                             const std::vector<TBlob> &inputs,
+                                             const std::vector<OpReqType> &req,
+                                             const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
 
 Review comment:
  Use `MSHADOW_REAL_TYPE_SWITCH` here, since you've already masked out the integer cases above.
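
  A sketch of the suggested change, keeping the launch body from the diff
  (illustrative only):

      MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
        // Non-float dtypes were already rejected above, so only real (float)
        // types can reach this switch.
        MSHADOW_REAL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
          const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
              + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
          if (size != 0) {
            Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(
                s, size, outputs[0].dptr<DType>(),
                inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
          }
        });
      });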


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556763
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -1865,15 +1890,53 @@ def hybrid_forward(self, F, a, *args, **kwargs):
         'arccosh' : (lambda x: 1./(x**2 - 1.)**(1./2.), 2.0, 5.0),
         'arctanh' : (lambda x: -1./(x**2 - 1.), -0.99, 0.99)
     }
-    if has_tvm_ops():
-        funcs['rad2deg'] = (lambda x: 180. / _np.pi * _np.ones(x.shape), -1.0, 1.0)
-        funcs['deg2rad'] = (lambda x: _np.pi / 180. * _np.ones(x.shape), -1.0, 1.0)
+
+    dtypes = ['float16', 'float32', 'float64', 'int8', 'uint8', 'int32', 'int64', 'bool']
     ndim = random.choice([2, 3, 4])
-    shape = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
-    for shape in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]:
-        for func, func_data in funcs.items():
+    i = random.choice([rand_shape_nd(ndim, dim=3), (1, 0, 2)])
+    shapes = [i for i in [rand_shape_nd(ndim, dim=3), (1, 0, 2)]];
+    for func, func_data in funcs.items():
+        for dtype, shape in itertools.product(dtypes, shapes):
+            rtol = 1e-2 if dtype == np.float16 else 1e-3
+            atol = 1e-4 if dtype == np.float16 else 1e-5
             ref_grad, low, high = func_data
-            check_unary_func(func, ref_grad, shape, low, high)
+            # get rid of warning: divide by zero
+            if((func=='log' or func=='log10' or func=='log2') and
+                (dtype=='int8' or dtype=='uint8' or dtype=='int32' or
+                dtype=='int64')):
+                low = 1
+            if (func=='arctanh' and dtype=='bool'):
+                continue
+            np_func = getattr(_np, func)
+            mx_func = TestUnary2(func)
+            np_test_data = _np.random.uniform(low, high, shape).astype(dtype)
+            mx_test_data = mx.numpy.array(np_test_data)
 
 Review comment:
   Simply use `np.array` instead of `mx.numpy.array`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r366774634
 
 

 ##########
 File path: src/operator/tensor/elemwise_unary_op.h
 ##########
 @@ -252,6 +252,39 @@ class UnaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedType(const nnvm::NodeAttrs& attrs,
+                      const OpContext& ctx,
+                      const std::vector<TBlob>& inputs,
+                      const std::vector<OpReqType>& req,
+                      const std::vector<TBlob>& outputs) {
+    mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+
+    if (mxnet::common::is_float(inputs[0].type_flag_)) {
+      MSHADOW_REAL_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        MSHADOW_REAL_TYPE_SWITCH(inputs[0].type_flag_, IType, {
 
 Review comment:
  This is the case where the forward computation's input is a floating-point number: the output will have the same type as the input, so you can directly call the original `Compute` function above instead of launching the kernel here.
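
  A sketch of that simplification (illustrative; it assumes the same-dtype
  `Compute<xpu, OP>` shown earlier in this class is in scope):

      if (mxnet::common::is_float(inputs[0].type_flag_)) {
        // Float input: the output dtype equals the input dtype, so the existing
        // same-type path already does the right thing.
        Compute<xpu, OP>(attrs, ctx, inputs, req, outputs);
      } else {
        // Mixed-type path (integer or bool input, floating-point output):
        // keep the kernel launch from the original diff here.
      }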


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r366773595
 
 

 ##########
 File path: src/operator/tensor/elemwise_unary_op.h
 ##########
 @@ -252,6 +252,39 @@ class UnaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedType(const nnvm::NodeAttrs& attrs,
+                      const OpContext& ctx,
 
 Review comment:
   alignment
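
  For reference, a sketch with the continuation lines lined up under the first
  parameter, matching the style of the other declarations quoted in this thread:

      template<typename xpu, typename OP>
      static void ComputeMixedType(const nnvm::NodeAttrs& attrs,
                                   const OpContext& ctx,
                                   const std::vector<TBlob>& inputs,
                                   const std::vector<OpReqType>& req,
                                   const std::vector<TBlob>& outputs);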


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r366773091
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,67 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void MixedUnaryBackwardUseInCompute(const nnvm::NodeAttrs &attrs,
+                                             const OpContext &ctx,
+                                             const std::vector<TBlob> &inputs,
+                                             const std::vector<OpReqType> &req,
+                                             const std::vector<TBlob> &outputs) {
+    using namespace mxnet_op;
+    if (req[0] == kNullOp) return;
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    CHECK_EQ(inputs.size(), 2U);
+    CHECK_EQ(outputs.size(), 1U);
+    if (!mxnet::common::is_float(inputs[1].type_flag_)) {
+      LOG(FATAL) << "Operator " << attrs.op->name <<
+                    " does not support type " << inputs[1].type_flag_;
+    }
+    if (outputs[0].type_flag_ == mshadow::kBool) {
+      LOG(FATAL) << "Operator " << attrs.op->name << " does not support boolean type";
+    }
+    MXNET_ASSIGN_REQ_SWITCH(req[0], Req, {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        const size_t size = (minthree(outputs[0].Size(), inputs[0].Size(), inputs[1].Size())
+        + DataType<DType>::kLanes - 1) / DataType<DType>::kLanes;
+        if (size != 0) {
+          Kernel<mxnet_op::op_with_req<OP, Req>, xpu>::Launch(s, size,
+          outputs[0].dptr<DType>(),
+          inputs[0].dptr<DType>(), inputs[1].dptr<DType>());
+        }
+      });
+    });
+  }
+
+  template<typename xpu, typename OP>
+  static void MixedUnaryBackwardUseInOutCompute(const nnvm::NodeAttrs &attrs,
+                                    const OpContext &ctx,
 
 Review comment:
   alignment


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r368161676
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -826,6 +918,14 @@ class ElemwiseBinaryOp : public OpBase {
     [](const NodeAttrs& attrs) { \
       return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};})
 
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_WITH_SPARSE_CPU(__name$, __kernel$)        \
 
 Review comment:
  No need for `WITH_SPARSE` in the name, since this macro does not include any sparse functionality.
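
  For illustration only, a hypothetical rename; the macro body would stay exactly
  as the PR defines it today:

      #define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN_CPU(__name$, __kernel$)  \
        /* body unchanged from the PR's ..._WITH_SPARSE_CPU definition */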


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556919
 
 

 ##########
 File path: src/operator/tensor/elemwise_binary_op.h
 ##########
 @@ -525,6 +525,68 @@ class ElemwiseBinaryOp : public OpBase {
     });
   }
 
+  template<typename xpu, typename OP>
+  static void ComputeMixedTypeIn(const nnvm::NodeAttrs &attrs,
+                      const OpContext &ctx,
 
 Review comment:
   alignment


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r381821492
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +83,90 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool MixedUnaryOpType(const nnvm::NodeAttrs& attrs,
+                             std::vector<int>* in_attrs,
+                             std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  if (mxnet::common::is_float(a_type)) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else if (a_type == mshadow::kInt32 || a_type == mshadow::kInt64) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat32);
+  } else {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat16);
+  }
+  return out_attrs->at(0) != -1;
+}
+
+#define MXNET_OPERATOR_REGISTER_NUMPY_MIXED_TYPE_UNARY(__name$, __input_name$, __kernel$) \
+  NNVM_REGISTER_OP(__name$)                                                               \
+  .set_num_inputs(1)                                                                      \
+  .set_num_outputs(1)                                                                     \
+  .set_attr<mxnet::FInferShape>("FInferShape", ElemwiseShape<1, 1>)                       \
+  .set_attr<nnvm::FInferType>("FInferType", MixedUnaryOpType)                             \
+  .set_attr<nnvm::FInplaceOption>("FInplaceOption",                                       \
+    [](const NodeAttrs& attrs){                                                           \
+      return std::vector<std::pair<int, int> >{{0, 0}};                                   \
+    })                                                                                    \
+  .set_attr<nnvm::FListInputNames>("FListInputNames",                                     \
+    [](const NodeAttrs& attrs) {                                                          \
+      return std::vector<std::string>{__input_name$};                                     \
+    })                                                                                    \
+  .set_attr<FCompute>("FCompute<cpu>", UnaryOp::ComputeMixedType<cpu, __kernel$>)         \
+  .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
+
+#define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_IN(name)            \
 
 Review comment:
  Use `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_IN` here, and `MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_INOUT` for the other macro.
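
  Sketched out, the suggestion pairs each macro name with the backward compute
  function it presumably registers (the function names come from the hunks in
  this thread; the macro bodies stay as in the PR):

      // backward op that only needs the forward input
      #define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_IN(name)                \
        /* body as in the PR, wired to ElemwiseBinaryOp::MixedUnaryBackwardUseInCompute */

      // backward op that also needs the forward output
      #define MXNET_OPERATOR_REGISTER_UNARY_MIXEDTYPE_BWD_INOUT(name)             \
        /* body as in the PR, wired to ElemwiseBinaryOp::MixedUnaryBackwardUseInOutCompute */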


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17254: [numpy] change unary infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r365556852
 
 

 ##########
 File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
 ##########
 @@ -82,6 +82,39 @@ NNVM_REGISTER_OP(_np_copy)
   .set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, __kernel$>)                  \
   .add_argument(__input_name$, "NDArray-or-Symbol", "The input array.")
 
+inline bool UnaryOpType(const nnvm::NodeAttrs& attrs,
+                              std::vector<int>* in_attrs,
 
 Review comment:
  Pay attention to the alignment here as well.


[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17254:
URL: https://github.com/apache/incubator-mxnet/pull/17254#issuecomment-618177983


   Jenkins CI successfully triggered : [centos-gpu]





[GitHub] [incubator-mxnet] Yiyan66 commented on issue #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
Yiyan66 commented on issue #17254:
URL: https://github.com/apache/incubator-mxnet/pull/17254#issuecomment-618177958


   @mxnet-bot run ci [centos-gpu]





[GitHub] [incubator-mxnet] mxnet-bot commented on issue #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on issue #17254:
URL: https://github.com/apache/incubator-mxnet/pull/17254#issuecomment-617509451


   Jenkins CI successfully triggered : [unix-gpu]





[GitHub] [incubator-mxnet] Yiyan66 commented on issue #17254: [numpy] change unary infer type

Posted by GitBox <gi...@apache.org>.
Yiyan66 commented on issue #17254:
URL: https://github.com/apache/incubator-mxnet/pull/17254#issuecomment-617509423


   @mxnet-bot run ci [unix-gpu]

