Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/15 18:39:48 UTC

[GitHub] [incubator-mxnet] CassiniXu opened a new pull request #17328: [numpy] add op pad

CassiniXu opened a new pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328
 
 
   ## Description ##
   add op pad
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
    - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146199
 
 

 ##########
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##########
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
 
 Review comment:
   extra blank line above
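
   For context, a minimal usage sketch of the semantics documented in the docstring above. The operator is modelled on `numpy.pad`, so plain NumPy illustrates the same call pattern; the new `mxnet.numpy.pad` added by this PR is assumed to accept the same arguments.

   ```python
   import numpy as np

   a = np.array([[1, 2], [3, 4]])
   # Pad one zero before and after each axis: shape (2, 2) -> (4, 4).
   np.pad(a, ((1, 1), (1, 1)), mode="constant", constant_values=0)
   # Repeat the edge values instead of a constant.
   np.pad(a, ((1, 1), (1, 1)), mode="edge")
   ```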


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378678841
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,725 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim, typename DTypeShape>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const DTypeShape* shape) {
+  index_t ret = 0;
+  int nndim = ndim;
+  #pragma unroll
+  for (int i = 0; i < nndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+/* Compute coordinates from flattened index given shape */
+template<int ndim, typename DTypeShape>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(const int idx,
+                                              const DTypeShape* shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (int i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
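A rough Python sketch of the row-major index arithmetic that `rravel`/`uunravel` above appear to implement (simplified: the clamping term `(shape[i] > coord[i])` is dropped, and plain lists stand in for `mshadow::Shape`):

```python
def ravel(coord, shape):
    # Row-major flat index from a coordinate, as in rravel above.
    flat = 0
    for c, s in zip(coord, shape):
        flat = flat * s + c
    return flat

def unravel(flat, shape):
    # Inverse mapping: coordinate from a flat index, as in uunravel above.
    coord = [0] * len(shape)
    for i in range(len(shape) - 1, -1, -1):
        flat, coord[i] = divmod(flat, shape[i])
    return coord

assert unravel(11, [3, 4]) == [2, 3] and ravel([2, 3], [3, 4]) == 11
```
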
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  double constant_value;
+  std::string reflect_type;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+    .describe("Number of values padded to the edges of each axis. "
+              "((before_1, after_1), … (before_N,"
+              "after_N)) unique pad widths for each axis. ((before, after),) "
+              "yields same before and"
+              "after pad for each axis. "
+              "(pad,) or int is a shortcut for before = after = pad width for all"
+              "axes.");
+    DMLC_DECLARE_FIELD(mode)
+    .set_default(1)
+    .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(constant_value)
+    .set_default(0.0)
+    .describe("Used in ‘constant’. The values to set the padded values for each axis."
+              "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+              "each axis."
+              "((before, after),) yields same before and after constants for each axis."
+              "(constant,) or constant is a shortcut for before = after = constant for all"
+              "axes."
+              "Default is 0.");
+    DMLC_DECLARE_FIELD(reflect_type)
+    .set_default("even")
+    .describe("Used in ‘reflect’, and ‘symmetric’. "
+              "The ‘even’ style is the default with an unaltered reflection around "
+              "the edge value. For the ‘odd’ style,"
+              "the extended part of the array is created by subtracting the "
+              "reflected values from two times the edge value.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
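In NumPy terms, the general (ndim >= 2) branch of `NumpyPadShapeImpl` above grows each dimension by its before/after pad widths; a hedged restatement:

```python
def padded_shape(ishape, pad_width):
    # pad_width is ((before_1, after_1), ..., (before_N, after_N)).
    return tuple(dim + before + after
                 for dim, (before, after) in zip(ishape, pad_width))

assert padded_shape((2, 3), ((1, 1), (0, 2))) == (4, 5)
```
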
+template <typename xpu, int req, bool back, int ndim>
+struct constant_pad {
+  template <typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  double constant_value) {
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        KERNEL_ASSIGN(out[i], req, constant_value);
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      index_t l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    }
+  }
+};
+
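Element-wise, the constant-padding kernel above boils down to the following Python sketch (one output coordinate at a time; the `req`/assignment modes are omitted):

```python
import numpy as np

def constant_pad_element(out_coord, a, pad_before, constant_value):
    # Shift the output coordinate back by the leading pad; if it still lands
    # inside the input, copy that element, otherwise write the constant.
    in_coord = tuple(c - b for c, b in zip(out_coord, pad_before))
    if all(0 <= c < s for c, s in zip(in_coord, a.shape)):
        return a[in_coord]
    return constant_value

a = np.array([[1, 2], [3, 4]])
assert constant_pad_element((0, 0), a, (1, 1), 0) == 0  # in the padded border
assert constant_pad_element((1, 1), a, (1, 1), 0) == 1  # maps back to a[0, 0]
```
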
+template <typename xpu, int req, bool back, int ndim>
+struct pad_copy {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    // if is origin
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      int l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    } else {
+      return;
+    }
+  }
+};
+
+template <typename xpu, int req, bool bac, int ndim>
+struct symmetric_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] || indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      // the round of this element
+      int round = (distance - 1) / total;
+      int position = distance % total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      } else {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2]+ishape[index]);
+      int total = ishape[index];
+      int position = distance % total;
+      int round = (distance - 1) / total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      } else {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
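The symmetric kernel above maps each out-of-range position back into the input by bouncing off the array ends, with the edge values repeated. A simplified 1-D Python reading of that index arithmetic, checked against `numpy.pad`:

```python
import numpy as np

def symmetric_source_index(pos, before, n):
    # Input index mirrored by 'symmetric' padding for an axis of length n,
    # padded by `before` on the left (simplified from the kernel above).
    if before <= pos < before + n:
        return pos - before
    distance = before - pos if pos < before else pos + 1 - (before + n)
    rnd = (distance - 1) // n          # completed bounces off the ends
    position = distance % n or n
    if (pos < before) == (rnd % 2 == 0):
        return position - 1
    return n - position

a = np.array([1, 2, 3])
src = [symmetric_source_index(p, 2, 3) for p in range(7)]
assert (a[src] == np.pad(a, 2, mode="symmetric")).all()
```
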
+template <typename xpu, int req, bool back, int ndim>
+struct edge_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+      // we can not do this now, since this is a former axis
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+    // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      indexshape[index] = indexwidth[index * 2] + ishape[index] - 1;
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
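By contrast, the edge kernel above simply clamps an out-of-range position to the nearest original index; a one-line 1-D sketch:

```python
import numpy as np

def edge_source_index(pos, before, n):
    # 'edge' padding: clamp to the first/last original element.
    return min(max(pos - before, 0), n - 1)

a = np.array([1, 2, 3])
src = [edge_source_index(p, 2, 3) for p in range(7)]
assert (a[src] == np.pad(a, 2, mode="edge")).all()
```
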
+template <typename xpu, int req, bool back, int ndim>
+struct reflect_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+      // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2] + ishape[index]);
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+  }
+  }
+};
+
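The reflect kernel above bounces like the symmetric one but mirrors about the edge elements without repeating them (period 2*(n-1)); a simplified 1-D sketch for n > 1, checked against `numpy.pad`:

```python
import numpy as np

def reflect_source_index(pos, before, n):
    # Input index mirrored by 'reflect' padding (edge values not repeated).
    if before <= pos < before + n:
        return pos - before
    distance = before - pos if pos < before else pos + 1 - (before + n)
    rnd = (distance - 1) // (n - 1)
    position = (distance + rnd) % n
    if (pos < before) == (rnd % 2 == 0):
        return position
    return n - 1 - position

a = np.array([1, 2, 3])
src = [reflect_source_index(p, 5, 3) for p in range(8)]
assert (a[src] == np.pad(a, (5, 0), mode="reflect")).all()
```
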
+template <typename xpu, int req, bool back, int ndim>
+struct max_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= indexwidth[index * 2] + ishape[index]) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int max_count = 0;
+      auto max_value = out[l];
+      for (max_count = 0; max_count < ishape[index]; max_count++) {
+        indexshape[index] = indexwidth[index * 2] + max_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] > max_value) {
+            max_value = out[l];
+        }
+      }
+      KERNEL_ASSIGN(out[i], req, max_value);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct min_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int min_count = 0;
+      auto min_value = out[l];
+      for (min_count = 0; min_count < ishape[index]; min_count++) {
+        indexshape[index] = indexwidth[index * 2] + min_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] < min_value) {
+            min_value = out[l];
+        }
+      }
+      j = uunravel<ndim>(i, oshape);
+      KERNEL_ASSIGN(out[i], req, min_value);
+    } else {
+      return;
+    }
+  }
+};
+
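For reference, the behaviour these two statistic kernels target matches the `maximum`/`minimum` modes of `numpy.pad`, where the pad values are the max/min of the vector along the axis:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
assert list(np.pad(a, 2, mode="maximum")) == [5, 5, 1, 2, 3, 4, 5, 5, 5]
assert list(np.pad(a, 2, mode="minimum")) == [1, 1, 1, 2, 3, 4, 5, 1, 1]
```
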
+
+template <typename xpu, int req, bool back>
+struct pad_grad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape *ishape,
+                                  const DTypeShape *oshape){
+    using namespace mxnet_op;
+    KERNEL_ASSIGN(out[i], req, 1);
+  }
+};
+
+template<typename xpu, bool back, typename ShapeDType>
+void NumpyPadOpImpl(const TBlob& in_data,
+                    const TBlob& out_data,
+                    ShapeDType* ishape,
+                    ShapeDType* oshape,
+                    index_t dsize,
+                    const NumpyPadParam& param,
+                    const std::vector<OpReqType>& req,
+                    mxnet_op::Stream<xpu> *s) {
+  using namespace mxnet_op;
+  using namespace mshadow;
+  int mode = param.mode;
+  int ndim = in_data.ndim();
+  MXNET_NDIM_SWITCH(ndim, NDim, {
+    mshadow::Shape<NDim*2> width;
+    int dimcounter = 0;
+    index_t* odptr = reinterpret_cast<index_t*>(oshape);
+    if (ndim == 1) {
+      width[0] = param.pad_width[0][0];
+      width[1] = param.pad_width[1][0];
+    } else {
+      for (dimcounter = 0; dimcounter < NDim; dimcounter++) {
+        width[dimcounter*2] = param.pad_width[dimcounter][0];
+        width[dimcounter*2 + 1] = param.pad_width[dimcounter][1];
+      }
+    }
+    if (!back) {
+      index_t* idptr = reinterpret_cast<index_t*>(ishape);
+      if (mode == 1) {
+      // constant padding start
+        MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+          MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+            Kernel<constant_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+              s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+              idptr, odptr, width, param.constant_value);
+          });
+        });
+      // constant padding end
+      } else {
+        MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+          MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+            Kernel<pad_copy<xpu, req_type, back, NDim>, xpu>::Launch(
+              s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+              idptr, odptr, width);
+          });
+        });
+        index_t index;
+        index_t dim = ndim;
+        if (mode == 2) {
+          // symmetric padding start
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<symmetric_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 3) {
+          // edge padding start
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<edge_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 4) {
+          // reflect padding start
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<reflect_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 5) {
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<max_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 6) {
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<min_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else {
+          // not support yet
+        }
+      }
+    } else {
+      index_t* idptr = reinterpret_cast<index_t*>(ishape);
+      MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+        MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+          Kernel<pad_grad<xpu, req_type, back>, xpu>::Launch(
+            s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+            idptr, odptr);
+        });
+      });
+    }
+  })
+}
+
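Reading the branches of `NumpyPadOpImpl` above, the integer `mode` field appears to encode the string modes as follows (an inference from this dispatch only, not stated elsewhere in the diff):

```python
# Hypothetical mapping inferred from the mode checks in NumpyPadOpImpl above.
MODE_CODES = {
    1: "constant",
    2: "symmetric",
    3: "edge",
    4: "reflect",
    5: "maximum",
    6: "minimum",
}
```
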
+
+template<typename xpu>
+void NumpyPadOpForward(const nnvm::NodeAttrs& attrs,
+                       const OpContext& ctx,
+                       const std::vector<TBlob>& inputs,
+                       const std::vector<OpReqType>& req,
+                       const std::vector<TBlob>& outputs) {
+  MXNET_NDIM_SWITCH(inputs[0].ndim(), NDim, {
+    using namespace mxnet_op;
+    using namespace mshadow;
+    CHECK_EQ(inputs.size(), 1U);
+    CHECK_EQ(outputs.size(), 1U);
+    CHECK_EQ(req.size(), 1U);
+    CHECK_EQ(req[0], kWriteTo);
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    const TBlob& in_data = inputs[0];
+    const TBlob& out_data = outputs[0];
+    size_t ts = in_data.ndim();
+    size_t count;
+    mshadow::Shape<NDim> inshape;
+    for (count = 0; count < ts; count++) {
+      inshape[count] = static_cast<index_t>((in_data.shape_)[count]);
+    }
+
+    Tensor<xpu, 1, index_t> tsp = ctx.requested[0].
+                                  get_space_typed<xpu, 1, index_t>(Shape1(2*ts), s);
+    Tensor<cpu, 1, index_t> ta(reinterpret_cast<index_t*>(inshape.shape_),
+                               Shape1(ts), ctx.get_stream<cpu>());
+    Tensor<xpu, 1, index_t> ti(reinterpret_cast<index_t*>(tsp.dptr_),
+                               Shape1(ts), ctx.get_stream<xpu>());
+    mshadow::Copy(ti, ta, ctx.get_stream<xpu>());
+
+    mshadow::Shape<NDim> outshape;
+    for (count = 0; count < ts; count++) {
+      outshape[count] = static_cast<index_t>((out_data.shape_)[count]);
+    }
+    index_t* wcp = tsp.dptr_;
+    wcp += ts;
+    Tensor<cpu, 1, index_t> tb(reinterpret_cast<index_t*>(outshape.shape_),
+                               Shape1(ts), ctx.get_stream<cpu>());
+    Tensor<xpu, 1, index_t> to(reinterpret_cast<index_t*>(wcp), Shape1(ts),
+                               ctx.get_stream<xpu>());
+    mshadow::Copy(to, tb, ctx.get_stream<xpu>());
+    const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+    index_t* wt = reinterpret_cast<index_t*>(to.dptr_);
+    index_t* wi = reinterpret_cast<index_t*>(ti.dptr_);
+
+    NumpyPadOpImpl<xpu, false, index_t>(in_data, out_data, wi,
+                               wt, out_data.Size(), param, req, s);
+  })
+}
+
+template<typename xpu>
+void NumpyPadOpBackward(const nnvm::NodeAttrs& attrs,
+                        const OpContext& ctx,
+                        const std::vector<TBlob>& inputs,
+                        const std::vector<OpReqType>& req,
+                        const std::vector<TBlob>& outputs) {
+  MXNET_NDIM_SWITCH(inputs[0].ndim(), NDim, {
+    using namespace mxnet_op;
+    using namespace mshadow;
+    CHECK_EQ(inputs.size(), 1U);
+    CHECK_EQ(outputs.size(), 1U);
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    const TBlob& in_data = inputs[0];
+    const TBlob& out_data = outputs[0];
+    size_t ts = in_data.ndim();
+    size_t count;
+    mshadow::Shape<NDim> inshape;
+    for (count = 0; count < ts; count++) {
+      inshape[count] = static_cast<index_t>((in_data.shape_)[count]);
+    }
+    Tensor<xpu, 1, index_t> tsp = ctx.requested[0].
+                                  get_space_typed<xpu, 1, index_t>(Shape1(2*ts), s);
+    Tensor<cpu, 1, index_t> ta(reinterpret_cast<index_t*>(inshape.shape_),
+                               Shape1(ts), ctx.get_stream<cpu>());
+    Tensor<xpu, 1, index_t> ti(reinterpret_cast<index_t*>(tsp.dptr_),
+                               Shape1(ts), ctx.get_stream<xpu>());
+    mshadow::Copy(ti, ta, ctx.get_stream<xpu>());
+
+    mshadow::Shape<NDim> outshape;
+    for (count = 0; count < ts; count++) {
+      outshape[count] = static_cast<index_t>((out_data.shape_)[count]);
+    }
+    index_t* wcp = tsp.dptr_;
+    wcp += ts;
+    Tensor<cpu, 1, index_t> tb(reinterpret_cast<index_t*>(outshape.shape_),
+                               Shape1(ts), ctx.get_stream<cpu>());
+    Tensor<xpu, 1, index_t> to(reinterpret_cast<index_t*>(wcp), Shape1(ts),
+                               ctx.get_stream<xpu>());
+    mshadow::Copy(to, tb, ctx.get_stream<xpu>());
+    const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+    index_t* wt = reinterpret_cast<index_t*>(to.dptr_);
+    index_t* wi = reinterpret_cast<index_t*>(ti.dptr_);
+
+
+    NumpyPadOpImpl<xpu, true, index_t>(in_data, out_data, wt,
+                               wi, out_data.Size(), param, req, s);
 
 Review comment:
   alignment
   ```c++
       NumpyPadOpImpl<xpu, true, index_t>(in_data, out_data, wt,
                                          wi, out_data.Size(), param, req, s);
   ```


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146462
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
 
 Review comment:
   2-space indentation.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367139190
 
 

 ##########
 File path: python/mxnet/numpy/multiarray.py
 ##########
 @@ -8517,3 +8516,98 @@ def bincount(x, weights=None, minlength=0):
     array([ 0.3,  0.7,  1.1])
     """
     return _mx_nd_np.bincount(x, weights=weights, minlength=minlength)
+
+@set_module('mxnet.NumPy')
+def pad(array, pad_width, mode="constant", constant_values=0, reflect_type="even"):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    Examples
 
 Review comment:
   extra blank line above


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146763
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
+        .describe("Used in ‘constant’. The values to set the padded values for each axis."
+                  "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+                  "each axis."
+                  "((before, after),) yields same before and after constants for each axis."
+                  "(constant,) or constant is a shortcut for before = after = constant for all"
+                  "axes."
+                  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+                            mxnet::ShapeVector* in_attrs,
+                            mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+    return false;
+  }
+  const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+  mxnet::TShape oshape = NumpyPadShapeImpl(ishape, param.pad_width);
+
+  if (shape_is_none(oshape)) {
+    LOG(FATAL) << "Pad does not exist.";
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+
+  return shape_is_known(out_attrs->at(0));
+}
+
+
+inline bool NumpyPadOpType(const nnvm::NodeAttrs &attrs,
 
 Review comment:
   you can probably use `ElemwiseType<1, 1>` for this op instead of implementing your own


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146521
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
 
 Review comment:
   2-space indentation.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144014
 
 

 ##########
 File path: python/mxnet/ndarray/numpy/_op.py
 ##########
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    Examples
 
 Review comment:
   extra blank line above


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146951
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
+        .describe("Used in ‘constant’. The values to set the padded values for each axis."
+                  "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+                  "each axis."
+                  "((before, after),) yields same before and after constants for each axis."
+                  "(constant,) or constant is a shortcut for before = after = constant for all"
+                  "axes."
+                  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+                            mxnet::ShapeVector* in_attrs,
+                            mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+    return false;
+  }
+  const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+  mxnet::TShape oshape = NumpyPadShapeImpl(ishape, param.pad_width);
+
+  if (shape_is_none(oshape)) {
+    LOG(FATAL) << "Pad does not exist.";
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+
+  return shape_is_known(out_attrs->at(0));
+}
+
+
+inline bool NumpyPadOpType(const nnvm::NodeAttrs &attrs,
+                           std::vector<int> *in_attrs,
+                           std::vector<int> *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, (*in_attrs)[0]);
+  TYPE_ASSIGN_CHECK(*in_attrs, 0, (*out_attrs)[0]);
+  return (*out_attrs)[0] != -1;
+}
+
+template <typename xpu, int req, bool back>
+struct constant_pad {
+  template <typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  double constant_value,
+                                  size_t ndim) {
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          KERNEL_ASSIGN(out[i], req, constant_value);
+        }
+      }
+      if (origin) {
+        for (m = 0; m < ndim; m++) {
+          j[m] = j[m] - pad_width[m][0];
+        }
+        index_t l = rravel<NDim>(j, ishape);
+        KERNEL_ASSIGN(out[i], req, a[l]);
+      }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct pad_copy{
 
 Review comment:
   Use `struct xxx {` (with a space before the opening brace) for every kernel declaration.
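   A minimal sketch of the requested style, using one of the kernels from this diff as a hypothetical, simplified example (arguments trimmed; illustrative only):
   ```c++
   // The point is the space between the struct name and the opening brace.
   template <typename xpu, int req, bool back>
   struct pad_copy {
     template <typename DType>
     MSHADOW_XINLINE static void Map(index_t i, DType* out, const DType* a) {
       KERNEL_ASSIGN(out[i], req, a[i]);
     }
   };
   ```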

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146145
 
 

 ##########
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##########
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    Examples
 
 Review comment:
   No need for an Examples section in the symbolic interface docstring.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146226
 
 

 ##########
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##########
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
 
 Review comment:
   Add an extra blank line above this line.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146496
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
 
 Review comment:
   2-space indentation.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144773
 
 

 ##########
 File path: src/operator/numpy/np_pad_op.cu
 ##########
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op.cu
+ * \brief GPU Implementation of numpy pad operations
+ */
+
+#include "./np_pad_op-inl.h"
+#include "../nn/concat-inl.h"
+
+namespace mxnet {
+namespace op {
+
+NNVM_REGISTER_OP(_npi_pad)
+.set_attr<FCompute>("FCompute<gpu>", NumpyPadOpForward<gpu>)
+.set_attr<FResourceRequest>("FResourceRequest",
+  [](const NodeAttrs& attrs) {
+    return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
+  });
 
 Review comment:
   ```c++
   NNVM_REGISTER_OP(_npi_pad)
   .set_attr<FCompute>("FCompute<gpu>", NumpyPadOpForward<gpu>);
   ```
   is enough; the `FResourceRequest` is already registered in the `.cc` file.
   Same for the op below.
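   For reference, a hedged sketch of what the `.cc` registration presumably already contains (it simply mirrors the `FResourceRequest` lambda removed here; not verified against the actual `.cc` file in this PR):
   ```c++
   // Assumed .cc-side registration; the FResourceRequest part is what makes it
   // redundant in the .cu file.
   NNVM_REGISTER_OP(_npi_pad)
   .set_attr<FCompute>("FCompute<cpu>", NumpyPadOpForward<cpu>)
   .set_attr<FResourceRequest>("FResourceRequest",
     [](const NodeAttrs& attrs) {
       return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
     });
   ```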

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146039
 
 

 ##########
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##########
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    Examples
+    --------
+    >>> a = [1, 2, 3, 4, 5]
+    >>> np.pad(a, (2, 3), 'edge')
+    array([1, 1, 1, ..., 5, 5, 5])
+    >>> np.pad(a, (2, 2), 'maximum')
+    array([5, 5, 1, 2, 3, 4, 5, 5, 5])
+    >>> np.pad(a, (2, 2), 'mean')
+    array([3, 3, 1, 2, 3, 4, 5, 3, 3])
+    >>> a = [[1, 2], [3, 4]]
+    >>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
+    array([[1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [3, 3, 3, 4, 3, 3, 3],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1]])
+    >>> a = [1, 2, 3, 4, 5]
+    >>> np.pad(a, (2, 3), 'reflect')
+    array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
+    >>> np.pad(a, (2, 3), 'symmetric')
+    array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
+    >>> a = np.arange(6)
+    >>> a = a.reshape((2, 3))
+    >>> np.pad(a, ((2, 2), (2, 2)), pad_with)
+    array([[10, 10, 10, 10, 10, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10],
+           [10, 10,  0,  1,  2, 10, 10],
+           [10, 10,  3,  4,  5, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10]])
+    """
+    if mode == "constant":
+        return _npi.pad(array, pad_width, 1, reflect_type, constant_values)
+    elif mode == "symmetric" and reflect_type == "even":
+        return _npi.pad(array, pad_width, 2, "even", constant_values)
+    elif mode == "edge":
+        return _npi.pad(array, pad_width, 3, reflect_type, constant_values)
+    elif mode == "reflect" and reflect_type == "even":
+        return _npi.pad(array, pad_width, 4, "even", constant_values)
+    elif mode == "empty":
+        pass
+    elif mode == "maximum":
+        return _npi.pad(array, pad_width, 5, "even", constant_values)
+    elif mode == "minimum":
+        return _npi.pad(array, pad_width, 6, "even", constant_values)
+    else:
+        raise ValueError(
+            "didn't support these modes and reflect_types."
+            )
 
 Review comment:
   No need for line wrap here.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146557
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
 
 Review comment:
   2-space indentation.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378686854
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,725 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim, typename DTypeShape>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const DTypeShape* shape) {
+  index_t ret = 0;
+  int nndim = ndim;
+  #pragma unroll
+  for (int i = 0; i < nndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+/* Compute coordinates from flattened index given shape */
+template<int ndim, typename DTypeShape>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(const int idx,
+                                              const DTypeShape* shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (int i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  double constant_value;
+  std::string reflect_type;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+    .describe("Number of values padded to the edges of each axis. "
+              "((before_1, after_1), … (before_N,"
+              "after_N)) unique pad widths for each axis. ((before, after),) "
+              "yields same before and"
+              "after pad for each axis. "
+              "(pad,) or int is a shortcut for before = after = pad width for all"
+              "axes.");
+    DMLC_DECLARE_FIELD(mode)
+    .set_default(1)
+    .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(constant_value)
+    .set_default(0.0)
+    .describe("Used in ‘constant’. The values to set the padded values for each axis."
+              "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+              "each axis."
+              "((before, after),) yields same before and after constants for each axis."
+              "(constant,) or constant is a shortcut for before = after = constant for all"
+              "axes."
+              "Default is 0.");
+    DMLC_DECLARE_FIELD(reflect_type)
+    .set_default("even")
+    .describe("Used in ‘reflect’, and ‘symmetric’. "
+              "The ‘even’ style is the default with an unaltered reflection around "
+              "the edge value. For the ‘odd’ style,"
+              "the extended part of the array is created by subtracting the "
+              "reflected values from two times the edge value.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+template <typename xpu, int req, bool back, int ndim>
+struct constant_pad {
+  template <typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  double constant_value) {
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        KERNEL_ASSIGN(out[i], req, constant_value);
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      index_t l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct pad_copy {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    // if is origin
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      int l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    } else {
+      return;
+    }
+  }
+};
+
+template <typename xpu, int req, bool bac, int ndim>
+struct symmetric_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] || indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      // the round of this element
+      int round = (distance - 1) / total;
+      int position = distance % total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      } else {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2]+ishape[index]);
+      int total = ishape[index];
+      int position = distance % total;
+      int round = (distance - 1) / total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      } else {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct edge_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+      // we can not do this now, since this is a former axis
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+    // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      indexshape[index] = indexwidth[index * 2] + ishape[index] - 1;
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct reflect_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+      // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2] + ishape[index]);
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+  }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct max_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= indexwidth[index * 2] + ishape[index]) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int max_count = 0;
+      auto max_value = out[l];
+      for (max_count = 0; max_count < ishape[index]; max_count++) {
+        indexshape[index] = indexwidth[index * 2] + max_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] > max_value) {
+            max_value = out[l];
+        }
+      }
+      KERNEL_ASSIGN(out[i], req, max_value);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct min_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int min_count = 0;
+      auto min_value = out[l];
+      for (min_count = 0; min_count < ishape[index]; min_count++) {
+        indexshape[index] = indexwidth[index * 2] + min_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] < min_value) {
+            min_value = out[l];
+        }
+      }
+      j = uunravel<ndim>(i, oshape);
+      KERNEL_ASSIGN(out[i], req, min_value);
+    } else {
+      return;
+    }
+  }
+};
+
+
+template <typename xpu, int req, bool back>
+struct pad_grad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape *ishape,
+                                  const DTypeShape *oshape){
+    using namespace mxnet_op;
+    KERNEL_ASSIGN(out[i], req, 1);
+  }
+};
+
+template<typename xpu, bool back, typename ShapeDType>
+void NumpyPadOpImpl(const TBlob& in_data,
+                    const TBlob& out_data,
+                    ShapeDType* ishape,
+                    ShapeDType* oshape,
+                    index_t dsize,
+                    const NumpyPadParam& param,
+                    const std::vector<OpReqType>& req,
+                    mxnet_op::Stream<xpu> *s) {
+  using namespace mxnet_op;
+  using namespace mshadow;
+  int mode = param.mode;
+  int ndim = in_data.ndim();
+  MXNET_NDIM_SWITCH(ndim, NDim, {
+    mshadow::Shape<NDim*2> width;
+    int dimcounter = 0;
+    index_t* odptr = reinterpret_cast<index_t*>(oshape);
+    if (ndim == 1) {
+      width[0] = param.pad_width[0][0];
+      width[1] = param.pad_width[1][0];
+    } else {
+      for (dimcounter = 0; dimcounter < NDim; dimcounter++) {
+        width[dimcounter*2] = param.pad_width[dimcounter][0];
+        width[dimcounter*2 + 1] = param.pad_width[dimcounter][1];
+      }
+    }
+    if (!back) {
+      index_t* idptr = reinterpret_cast<index_t*>(ishape);
+      if (mode == 1) {
+      // constant padding start
+        MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
 
 Review comment:
   Change all occurrences of `MSHADOW_TYPE_SWITCH` to `MSHADOW_TYPE_SWITCH_WITH_BOOL`.
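   A sketch of the requested change, applied to the line quoted above; `MSHADOW_TYPE_SWITCH_WITH_BOOL` takes the same arguments and additionally dispatches the bool dtype:
   ```c++
   // Before:
   // MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, { ... });
   // After (body unchanged, bool dtype now covered as well):
   MSHADOW_TYPE_SWITCH_WITH_BOOL(out_data.type_flag_, DType, {
     // kernel launches unchanged
   });
   ```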

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367138993
 
 

 ##########
 File path: python/mxnet/numpy/multiarray.py
 ##########
 @@ -8517,3 +8516,98 @@ def bincount(x, weights=None, minlength=0):
     array([ 0.3,  0.7,  1.1])
     """
     return _mx_nd_np.bincount(x, weights=weights, minlength=minlength)
+
+@set_module('mxnet.NumPy')
+def pad(array, pad_width, mode="constant", constant_values=0, reflect_type="even"):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
 
 Review comment:
   Add an extra blank line above this line.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367139109
 
 

 ##########
 File path: python/mxnet/numpy/multiarray.py
 ##########
 @@ -8467,7 +8467,6 @@ def where(condition, x=None, y=None):
     """
     return _mx_nd_np.where(condition, x, y)
 
-
 
 Review comment:
   Restore this blank line.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378723574
 
 

 ##########
 File path: python/mxnet/ndarray/numpy/_op.py
 ##########
 @@ -6877,3 +6876,132 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(x, pad_width=None, mode="constant", stat_length=None, constant_values=0, end_values=0, reflect_type="even"): # pylint: disable=too-many-arguments
+    """
+    Pad an array.
+
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
 
 Review comment:
   We don't support this mode, so we should remove it from the doc. Please also check the other parts of the docs so that they accurately reflect our implementation.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378686154
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,725 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim, typename DTypeShape>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const DTypeShape* shape) {
+  index_t ret = 0;
+  int nndim = ndim;
+  #pragma unroll
+  for (int i = 0; i < nndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+/* Compute coordinates from flattened index given shape */
+template<int ndim, typename DTypeShape>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(const int idx,
+                                              const DTypeShape* shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (int i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  double constant_value;
+  std::string reflect_type;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+    .describe("Number of values padded to the edges of each axis. "
+              "((before_1, after_1), … (before_N,"
+              "after_N)) unique pad widths for each axis. ((before, after),) "
+              "yields same before and"
+              "after pad for each axis. "
+              "(pad,) or int is a shortcut for before = after = pad width for all"
+              "axes.");
+    DMLC_DECLARE_FIELD(mode)
+    .set_default(1)
+    .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(constant_value)
+    .set_default(0.0)
+    .describe("Used in ‘constant’. The values to set the padded values for each axis."
+              "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+              "each axis."
+              "((before, after),) yields same before and after constants for each axis."
+              "(constant,) or constant is a shortcut for before = after = constant for all"
+              "axes."
+              "Default is 0.");
+    DMLC_DECLARE_FIELD(reflect_type)
+    .set_default("even")
+    .describe("Used in ‘reflect’, and ‘symmetric’. "
+              "The ‘even’ style is the default with an unaltered reflection around "
+              "the edge value. For the ‘odd’ style,"
+              "the extended part of the array is created by subtracting the "
+              "reflected values from two times the edge value.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+template <typename xpu, int req, bool back, int ndim>
+struct constant_pad {
+  template <typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  double constant_value) {
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        KERNEL_ASSIGN(out[i], req, constant_value);
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      index_t l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct pad_copy {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    // if is origin
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      int l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    } else {
+      return;
+    }
+  }
+};
+
+template <typename xpu, int req, bool bac, int ndim>
+struct symmetric_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] || indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      // the round of this element
+      int round = (distance - 1) / total;
+      int position = distance % total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      } else {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2]+ishape[index]);
+      int total = ishape[index];
+      int position = distance % total;
+      int round = (distance - 1) / total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      } else {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct edge_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+      // we can not do this now, since this is a former axis
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+    // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      indexshape[index] = indexwidth[index * 2] + ishape[index] - 1;
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct reflect_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+      // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2] + ishape[index]);
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+  }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct max_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= indexwidth[index * 2] + ishape[index]) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int max_count = 0;
+      auto max_value = out[l];
+      for (max_count = 0; max_count < ishape[index]; max_count++) {
+        indexshape[index] = indexwidth[index * 2] + max_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] > max_value) {
+            max_value = out[l];
+        }
+      }
+      KERNEL_ASSIGN(out[i], req, max_value);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct min_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int min_count = 0;
+      auto min_value = out[l];
+      for (min_count = 0; min_count < ishape[index]; min_count++) {
+        indexshape[index] = indexwidth[index * 2] + min_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] < min_value) {
+            min_value = out[l];
+        }
+      }
+      j = uunravel<ndim>(i, oshape);
+      KERNEL_ASSIGN(out[i], req, min_value);
+    } else {
+      return;
+    }
+  }
+};
+
+
+template <typename xpu, int req, bool back>
+struct pad_grad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape *ishape,
+                                  const DTypeShape *oshape){
+    using namespace mxnet_op;
+    KERNEL_ASSIGN(out[i], req, 1);
+  }
+};
+
+template<typename xpu, bool back, typename ShapeDType>
+void NumpyPadOpImpl(const TBlob& in_data,
+                    const TBlob& out_data,
+                    ShapeDType* ishape,
 
 Review comment:
   get rid of the `ShapeDType` template, and directly use `index_t*` here instead
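
   For reference, a standalone sketch of the idea (an illustration only, not the PR's code; it assumes `index_t` is the 64-bit index type used in large-tensor builds):
   ```c++
   // Sketch only: pass the shapes as plain index_t* so the extra
   // ShapeDType template parameter is no longer needed.
   #include <cstdint>
   #include <cstdio>

   typedef int64_t index_t;  // assumption: 64-bit index type

   // hypothetical helper mirroring the NumpyPadOpImpl shape parameters
   void PrintPaddedShape(const index_t* ishape, const index_t* oshape, int ndim) {
     for (int i = 0; i < ndim; ++i) {
       std::printf("axis %d: %lld -> %lld\n", i,
                   static_cast<long long>(ishape[i]),
                   static_cast<long long>(oshape[i]));
     }
   }

   int main() {
     index_t ishape[2] = {2, 3};
     index_t oshape[2] = {6, 7};  // e.g. pad_width = ((2, 2), (2, 2))
     PrintPaddedShape(ishape, oshape, 2);
     return 0;
   }
   ```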

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367147044
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
+        .describe("Used in ‘constant’. The values to set the padded values for each axis."
+                  "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+                  "each axis."
+                  "((before, after),) yields same before and after constants for each axis."
+                  "(constant,) or constant is a shortcut for before = after = constant for all"
+                  "axes."
+                  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+                            mxnet::ShapeVector* in_attrs,
+                            mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+    return false;
+  }
+  const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+  mxnet::TShape oshape = NumpyPadShapeImpl(ishape, param.pad_width);
+
+  if (shape_is_none(oshape)) {
+    LOG(FATAL) << "Pad does not exist.";
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+
+  return shape_is_known(out_attrs->at(0));
+}
+
+
+inline bool NumpyPadOpType(const nnvm::NodeAttrs &attrs,
+                           std::vector<int> *in_attrs,
+                           std::vector<int> *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, (*in_attrs)[0]);
+  TYPE_ASSIGN_CHECK(*in_attrs, 0, (*out_attrs)[0]);
+  return (*out_attrs)[0] != -1;
+}
+
+template <typename xpu, int req, bool back>
+struct constant_pad {
+  template <typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  double constant_value,
+                                  size_t ndim) {
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          KERNEL_ASSIGN(out[i], req, constant_value);
+        }
+      }
+      if (origin) {
+        for (m = 0; m < ndim; m++) {
+          j[m] = j[m] - pad_width[m][0];
+        }
+        index_t l = rravel<NDim>(j, ishape);
+        KERNEL_ASSIGN(out[i], req, a[l]);
+      }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct pad_copy{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      // if is origin
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+        for (m = 0; m < ndim; m++) {
+          j[m] = j[m] - pad_width[m][0];
+        }
+        int l = rravel<NDim>(j, ishape);
+        KERNEL_ASSIGN(out[i], req, a[l]);
+      } else {
+        return;
+      }
+    })
+  }
+};
+
+
+
 
 Review comment:
   Use exactly one blank line between functions/structs in C++.
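
   For example, a single blank line between the two definitions:
   ```c++
   struct pad_copy {
     // ...
   };

   struct symmetric_pad {
     // ...
   };
   ```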

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367143100
 
 

 ##########
 File path: python/mxnet/ndarray/numpy/_op.py
 ##########
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    Examples
+    --------
+    >>> a = [1, 2, 3, 4, 5]
+    >>> np.pad(a, (2, 3), 'edge')
+    array([1, 1, 1, ..., 5, 5, 5])
+    >>> np.pad(a, (2, 2), 'maximum')
+    array([5, 5, 1, 2, 3, 4, 5, 5, 5])
+    >>> np.pad(a, (2, 2), 'mean')
+    array([3, 3, 1, 2, 3, 4, 5, 3, 3])
+    >>> a = [[1, 2], [3, 4]]
+    >>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
+    array([[1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [3, 3, 3, 4, 3, 3, 3],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1]])
+    >>> a = [1, 2, 3, 4, 5]
+    >>> np.pad(a, (2, 3), 'reflect')
+    array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
+    >>> np.pad(a, (2, 3), 'symmetric')
+    array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
+    >>> a = np.arange(6)
+    >>> a = a.reshape((2, 3))
+    >>> np.pad(a, ((2, 2), (2, 2)), pad_with)
+    array([[10, 10, 10, 10, 10, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10],
+           [10, 10,  0,  1,  2, 10, 10],
+           [10, 10,  3,  4,  5, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10]])
+    """
+
+    if array.size == 0:
+        for axis, width_pair in zip(axes, pad_width):
+            if array.shape[axis] == 0 and any(width_pair):
+                raise ValueError(
+                    "can't extend empty axis {} using modes other than "
+                    "'constant' or 'empty'".format(axis)
+                )
+    else:
+        if mode == "constant":
+            return _npi.pad(array, pad_width, 1, reflect_type, constant_values)
+        elif mode == "symmetric" and reflect_type == "even":
+            return _npi.pad(array, pad_width, 2, "even", constant_values)
+        elif mode == "edge":
+            return _npi.pad(array, pad_width, 3, reflect_type, constant_values)
+        elif mode == "reflect" and reflect_type == "even":
+            return _npi.pad(array, pad_width, 4, "even", constant_values)
+        elif mode == "empty":
+            pass
+        elif mode == "maximum":
+            return _npi.pad(array, pad_width, 5, "even", constant_values)
+        elif mode == "minimum":
+            return _npi.pad(array, pad_width, 6, "even", constant_values)
+        else:
+            raise ValueError(
+                "didn't support these modes and reflect_types."
+                )
 
 Review comment:
   no need for line wraps here
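
   For instance, the same message fits on a single line:
   ```python
   raise ValueError("didn't support these modes and reflect_types.")
   ```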

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378677706
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,725 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim, typename DTypeShape>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const DTypeShape* shape) {
+  index_t ret = 0;
+  int nndim = ndim;
+  #pragma unroll
+  for (int i = 0; i < nndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+/* Compute coordinates from flattened index given shape */
+template<int ndim, typename DTypeShape>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(const int idx,
+                                              const DTypeShape* shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (int i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  double constant_value;
+  std::string reflect_type;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+    .describe("Number of values padded to the edges of each axis. "
+              "((before_1, after_1), … (before_N,"
+              "after_N)) unique pad widths for each axis. ((before, after),) "
+              "yields same before and"
+              "after pad for each axis. "
+              "(pad,) or int is a shortcut for before = after = pad width for all"
+              "axes.");
+    DMLC_DECLARE_FIELD(mode)
+    .set_default(1)
 
 Review comment:
   Suggest a better representation of the modes: you could use a combination of an enum type and `.add_enum`, like:
   ```
   enum NumpyPadMode {kConstant, ...};
   // ...
   DMLC_DECLARE_FIELD(mode)
     .add_enum("constant", kConstant)
     // 'add_enum' for all other modes
     .set_default(kConstant)
   // ...
   
   ```
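
   With that, the mode checks could read off the enum values instead of magic integers, e.g. (sketch only; the values mirror the 1 to 6 codes the Python front-end currently passes):
   ```c++
   // Sketch only: one possible enum covering the currently supported modes.
   enum NumpyPadMode {kConstant = 1, kSymmetric, kEdge, kReflect, kMaximum, kMinimum};

   // hypothetical helper, not part of the PR
   inline const char* PadModeName(int mode) {
     switch (mode) {
       case kConstant:  return "constant";
       case kSymmetric: return "symmetric";
       case kEdge:      return "edge";
       case kReflect:   return "reflect";
       case kMaximum:   return "maximum";
       case kMinimum:   return "minimum";
       default:         return "unknown";
     }
   }
   ```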

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367139047
 
 

 ##########
 File path: python/mxnet/numpy/multiarray.py
 ##########
 @@ -8517,3 +8516,98 @@ def bincount(x, weights=None, minlength=0):
     array([ 0.3,  0.7,  1.1])
     """
     return _mx_nd_np.bincount(x, weights=weights, minlength=minlength)
+
+@set_module('mxnet.NumPy')
+def pad(array, pad_width, mode="constant", constant_values=0, reflect_type="even"):
+    """
+    Pad an array.
+    Parameters
 
 Review comment:
   Add an extra blank line above `Parameters`, separating it from the summary line.
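
   i.e. numpydoc expects a blank line between the summary and the section header, roughly:
   ```python
   def pad(array, pad_width, mode="constant", constant_values=0, reflect_type="even"):
       """
       Pad an array.

       Parameters
       ----------
       array : array_like of rank N
           The array to pad.
       """
   ```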

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r375087669
 
 

 ##########
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##########
 @@ -5866,4 +5866,83 @@ def bincount(x, weights=None, minlength=0):
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width=None, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    """
+    if mode == "constant":
+        return _npi.pad(array, pad_width, 1, reflect_type, constant_values)
+    elif mode == "symmetric" and reflect_type == "even":
+        return _npi.pad(array, pad_width, 2, "even", constant_values)
+    elif mode == "edge":
+        return _npi.pad(array, pad_width, 3, reflect_type, constant_values)
+    elif mode == "reflect" and reflect_type == "even":
+        return _npi.pad(array, pad_width, 4, "even", constant_values)
+    elif mode == "maximum":
+        return _npi.pad(array, pad_width, 5, "even", constant_values)
+    elif mode == "minimum":
+        return _npi.pad(array, pad_width, 6, "even", constant_values)
+    else:
+        raise ValueError("didn't support these modes and reflect_types.")
 
 Review comment:
   give more specific error messages about the exact inputs which are not supported.
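
   Something along these lines would work (just a sketch of the wording):
   ```python
   raise ValueError(
       "Unsupported mode {!r} with reflect_type {!r}: supported modes are 'constant', "
       "'edge', 'reflect', 'symmetric', 'maximum' and 'minimum', and only "
       "reflect_type='even' is currently handled.".format(mode, reflect_type))
   ```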

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r375087832
 
 

 ##########
 File path: python/mxnet/ndarray/numpy/_op.py
 ##########
 @@ -6476,3 +6475,119 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width=None, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+
+    Examples
+    --------
+    >>> a = [1, 2, 3, 4, 5]
+    >>> np.pad(a, (2, 3), 'edge')
+    array([1, 1, 1, ..., 5, 5, 5])
+    >>> np.pad(a, (2, 2), 'maximum')
+    array([5, 5, 1, 2, 3, 4, 5, 5, 5])
+    >>> np.pad(a, (2, 2), 'mean')
+    array([3, 3, 1, 2, 3, 4, 5, 3, 3])
+    >>> a = [[1, 2], [3, 4]]
+    >>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
+    array([[1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1],
+           [3, 3, 3, 4, 3, 3, 3],
+           [1, 1, 1, 2, 1, 1, 1],
+           [1, 1, 1, 2, 1, 1, 1]])
+    >>> a = [1, 2, 3, 4, 5]
+    >>> np.pad(a, (2, 3), 'reflect')
+    array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
+    >>> np.pad(a, (2, 3), 'symmetric')
+    array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
+    >>> a = np.arange(6)
+    >>> a = a.reshape((2, 3))
+    >>> np.pad(a, ((2, 2), (2, 2)), pad_with)
+    array([[10, 10, 10, 10, 10, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10],
+           [10, 10,  0,  1,  2, 10, 10],
+           [10, 10,  3,  4,  5, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10],
+           [10, 10, 10, 10, 10, 10, 10]])
+    """
+
+    if not isinstance(array, NDArray):
+        raise TypeError("Input data should be NDarray")
+    if mode == "constant":
+        return _npi.pad(array, pad_width, 1, reflect_type, constant_values)
+    elif mode == "symmetric" and reflect_type == "even":
+        return _npi.pad(array, pad_width, 2, "even", constant_values)
+    elif mode == "edge":
+        return _npi.pad(array, pad_width, 3, reflect_type, constant_values)
+    elif mode == "reflect" and reflect_type == "even":
+        return _npi.pad(array, pad_width, 4, "even", constant_values)
+    elif mode == "maximum":
+        return _npi.pad(array, pad_width, 5, "even", constant_values)
+    elif mode == "minimum":
+        return _npi.pad(array, pad_width, 6, "even", constant_values)
+    else:
+        raise ValueError("didn't support these modes and reflect_types.")
 
 Review comment:
   same here, give a more meaningful error message.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367147434
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
+        .describe("Used in ‘constant’. The values to set the padded values for each axis."
+                  "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+                  "each axis."
+                  "((before, after),) yields same before and after constants for each axis."
+                  "(constant,) or constant is a shortcut for before = after = constant for all"
+                  "axes."
+                  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+                            mxnet::ShapeVector* in_attrs,
+                            mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+    return false;
+  }
+  const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+  mxnet::TShape oshape = NumpyPadShapeImpl(ishape, param.pad_width);
+
+  if (shape_is_none(oshape)) {
+    LOG(FATAL) << "Pad does not exist.";
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+
+  return shape_is_known(out_attrs->at(0));
+}
+
+
+inline bool NumpyPadOpType(const nnvm::NodeAttrs &attrs,
+                           std::vector<int> *in_attrs,
+                           std::vector<int> *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  TYPE_ASSIGN_CHECK(*out_attrs, 0, (*in_attrs)[0]);
+  TYPE_ASSIGN_CHECK(*in_attrs, 0, (*out_attrs)[0]);
+  return (*out_attrs)[0] != -1;
+}
+
+template <typename xpu, int req, bool back>
+struct constant_pad {
+  template <typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  double constant_value,
+                                  size_t ndim) {
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          KERNEL_ASSIGN(out[i], req, constant_value);
+        }
+      }
+      if (origin) {
+        for (m = 0; m < ndim; m++) {
+          j[m] = j[m] - pad_width[m][0];
+        }
+        index_t l = rravel<NDim>(j, ishape);
+        KERNEL_ASSIGN(out[i], req, a[l]);
+      }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct pad_copy{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      // if is origin
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+        for (m = 0; m < ndim; m++) {
+          j[m] = j[m] - pad_width[m][0];
+        }
+        int l = rravel<NDim>(j, ishape);
+        KERNEL_ASSIGN(out[i], req, a[l]);
+      } else {
+        return;
+      }
+    })
+  }
+};
+
+
+
+template <typename xpu, int req, bool back>
+struct symmetric_pad{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t index,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+
+      for (m = 0; m < index; m++) {
+        if (j[m] < pad_width[m][0] || j[m] >= pad_width[m][0] + ishape[m]) {
+          // we can not do this now
+          return;
+        }
+      }
+
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+        // this thread is in the origin position, then return
+        return;
+      }
+      if (j[index] < pad_width[index][0]) {
+      // we need to do the assignment
+        int distance = pad_width[index][0] - j[index];
+        int total = ishape[index];
+        // the round of this element
+        int round = (distance - 1) / total;
+        int position = distance % total;
+        if (position == 0) {
+          position = ishape[index];
+        }
+        if (round % 2 == 0) {
+          j[index] = pad_width[index][0] + position - 1;
+        } else {
+          j[index] = pad_width[index][0] + ishape[index] - 1 - (position - 1);
+        }
+        int l = rravel<NDim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+      } else if (j[index] >= (pad_width[index][0]+ishape[index])) {
+        int distance = (j[index]+1) - (pad_width[index][0]+ishape[index]);
+        int total = ishape[index];
+        int position = distance % total;
+        int round = (distance - 1) / total;
+        if (position == 0) {
+          position = ishape[index];
+        }
+        if (round % 2 == 0) {
+          j[index] =  pad_width[index][0] + ishape[index] - 1 - (position - 1);
+        } else {
+          j[index] = pad_width[index][0] + position - 1;
+        }
+        int l = rravel<NDim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+      }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct edge_pad{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t index,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < index; m++) {
+        if (j[m] < pad_width[m][0] || j[m] >= pad_width[m][0] + ishape[m]) {
+        // we can not do this now, since this is a former axis
+          return;
+        }
+      }
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+      // this thread is in the origin position, then return
+        return;
+      }
+      if (j[index] < pad_width[index][0]) {
+      // we need to do the assignment
+        j[index] = pad_width[index][0];
+        int l = rravel<NDim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+      } else if (j[index] >= (pad_width[index][0]+ishape[index])) {
+        j[index] =  pad_width[index][0] + ishape[index] - 1;
+        int l = rravel<NDim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+      }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct reflect_pad{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t index,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < index; m++) {
+        if (j[m] < pad_width[m][0] || j[m] >= pad_width[m][0] + ishape[m]) {
+          // we can not do this now
+          return;
+        }
+      }
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+        // this thread is in the origin position, then return
+        return;
+      }
+      if (j[index] < pad_width[index][0]) {
+        // we need to do the assignment
+        int distance = pad_width[index][0] - j[index];
+        int total = ishape[index];
+        if (total == 1) {
+          j[index] = pad_width[index][0];
+          int l = rravel<NDim>(j, oshape);
+          KERNEL_ASSIGN(out[i], req, out[l]);
+          return;
+        }
+        int round = (distance - 1) / (total - 1);
+        if (round % 2 == 0) {
+          int position = (distance + round) % total;
+          j[index] = pad_width[index][0] + position;
+        } else {
+          int position = (distance + round) % total;
+          j[index] =  pad_width[index][0] + ishape[index] - 1 - (position);
+        }
+        int l = rravel<NDim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+      } else if (j[index] >= (pad_width[index][0] + ishape[index])) {
+        int distance = (j[index]+1) - (pad_width[index][0] + ishape[index]);
+        int total = ishape[index];
+        if (total == 1) {
+          j[index] = pad_width[index][0];
+          int l = rravel<NDim>(j, oshape);
+          KERNEL_ASSIGN(out[i], req, out[l]);
+          return;
+        }
+        int round = (distance - 1) / (total - 1);
+        if (round % 2 == 0) {
+          int position = (distance + round) % total;
+          j[index] =  pad_width[index][0] + ishape[index] - 1 - (position);
+        } else {
+          int position = (distance + round) % total;
+          j[index] = pad_width[index][0] + position;
+        }
+        int l = rravel<NDim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct max_pad{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t index,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < index; m++) {
+        if (j[m] < pad_width[m][0] || j[m] >= pad_width[m][0] + ishape[m]) {
+          // we can not do this now
+          return;
+        }
+      }
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+        // this thread is in the origin position, then return
+        return;
+      }
+
+      if (j[index] < pad_width[index][0] || j[index] >= pad_width[index][0] + ishape[index]) {
+        j[index] = pad_width[index][0];
+        int l = rravel<NDim>(j, oshape);
+        int max_count = 0;
+        auto max_value = out[l];
+        for (max_count = 0; max_count < ishape[index]; max_count++) {
+          j[index] = pad_width[index][0] + max_count;
+          l = rravel<NDim>(j, oshape);
+          if (out[l] > max_value) {
+              max_value = out[l];
+          }
+        }
+        KERNEL_ASSIGN(out[i], req, max_value);
+      }
+    })
+  }
+};
+
+template <typename xpu, int req, bool back>
+struct min_pad{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width,
+                                  size_t index,
+                                  size_t ndim){
+    using namespace mxnet_op;
+    MXNET_NDIM_SWITCH(ndim, NDim, {
+      auto j = uunravel<NDim>(i, oshape);
+      size_t m;
+      bool origin = true;
+      for (m = 0; m < index; m++) {
+        if (j[m] < pad_width[m][0] || j[m] >= pad_width[m][0] + ishape[m]) {
+          // we can not do this now
+          return;
+        }
+      }
+      for (m = 0; m < ndim; m++) {
+        if (j[m] >= pad_width[m][0] && j[m] < pad_width[m][0] + ishape[m]) {
+          continue;
+        } else {
+          origin = false;
+          break;
+        }
+      }
+      if (origin) {
+        // this thread is in the origin position, then return
+        return;
+      }
+      if (j[index] < pad_width[index][0] || j[index] >= (pad_width[index][0] + ishape[index])) {
+        j[index] = pad_width[index][0];
+        int l = rravel<NDim>(j, oshape);
+        int min_count = 0;
+        auto min_value = out[l];
+        for (min_count = 0; min_count < ishape[index]; min_count++) {
+          j[index] = pad_width[index][0] + min_count;
+          l = rravel<NDim>(j, oshape);
+          if (out[l] < min_value) {
+              min_value = out[l];
+          }
+        }
+        j = uunravel<NDim>(i, oshape);
+        KERNEL_ASSIGN(out[i], req, min_value);
+      } else {
+        return;
+      }
+    })
+  }
+};
+
+
+template <typename xpu, int req, bool back>
+struct pad_grad{
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const mshadow::Tensor<xpu, 1, index_t>& ishape,
+                                  const mshadow::Tensor<xpu, 1, index_t>& oshape,
+                                  mxnet::Tuple<Tuple<int>> pad_width
+                                  ){
 
 Review comment:
   Move the closing parenthesis and opening brace onto the parameter line:
   ```c++
                                     mxnet::Tuple<Tuple<int>> pad_width){
   ```


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144346
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
+        .describe("Used in ‘constant’. The values to set the padded values for each axis."
+                  "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+                  "each axis."
+                  "((before, after),) yields same before and after constants for each axis."
+                  "(constant,) or constant is a shortcut for before = after = constant for all"
+                  "axes."
+                  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+                            mxnet::ShapeVector* in_attrs,
+                            mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+    return false;
+  }
+  const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+  mxnet::TShape oshape = NumpyPadShapeImpl(ishape, param.pad_width);
+
+  if (shape_is_none(oshape)) {
+    LOG(FATAL) << "Pad does not exist.";
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+
+  return shape_is_known(out_attrs->at(0));
+}
+
+
+inline bool NumpyPadOpType(const nnvm::NodeAttrs &attrs,
 
 Review comment:
   Move this function to `.cc`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144282
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <size_t ndim, typename xpu>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<size_t ndim, typename xpu>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<xpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+        .describe("Number of values padded to the edges of each axis. "
+                  "((before_1, after_1), … (before_N,"
+                  "after_N)) unique pad widths for each axis. ((before, after),) "
+                  "yields same before and"
+                  "after pad for each axis. "
+                  "(pad,) or int is a shortcut for before = after = pad width for all"
+                  "axes.");
+    DMLC_DECLARE_FIELD(mode)
+        .set_default(1)
+        .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+        .set_default("even")
+        .describe("Used in ‘reflect’, and ‘symmetric’. "
+                  "The ‘even’ style is the default with an unaltered reflection around "
+                  "the edge value. For the ‘odd’ style,"
+                  "the extended part of the array is created by subtracting the "
+                  "reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+        .set_default(0.0)
+        .describe("Used in ‘constant’. The values to set the padded values for each axis."
+                  "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+                  "each axis."
+                  "((before, after),) yields same before and after constants for each axis."
+                  "(constant,) or constant is a shortcut for before = after = constant for all"
+                  "axes."
+                  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   Move this function to `.cc`


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378687060
 
 

 ##########
 File path: src/operator/numpy/np_pad_op-inl.h
 ##########
 @@ -0,0 +1,725 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include <vector>
+#include <algorithm>
+#include <string>
+#include <utility>
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim, typename DTypeShape>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const DTypeShape* shape) {
+  index_t ret = 0;
+  int nndim = ndim;
+  #pragma unroll
+  for (int i = 0; i < nndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+/* Compute coordinates from flattened index given shape */
+template<int ndim, typename DTypeShape>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(const int idx,
+                                              const DTypeShape* shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (int i = ndim-1, j = idx; i >=0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp*shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  double constant_value;
+  std::string reflect_type;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+    .describe("Number of values padded to the edges of each axis. "
+              "((before_1, after_1), … (before_N,"
+              "after_N)) unique pad widths for each axis. ((before, after),) "
+              "yields same before and"
+              "after pad for each axis. "
+              "(pad,) or int is a shortcut for before = after = pad width for all"
+              "axes.");
+    DMLC_DECLARE_FIELD(mode)
+    .set_default(1)
+    .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(constant_value)
+    .set_default(0.0)
+    .describe("Used in ‘constant’. The values to set the padded values for each axis."
+              "((before_1, after_1), ... (before_N, after_N)) unique pad constants for"
+              "each axis."
+              "((before, after),) yields same before and after constants for each axis."
+              "(constant,) or constant is a shortcut for before = after = constant for all"
+              "axes."
+              "Default is 0.");
+    DMLC_DECLARE_FIELD(reflect_type)
+    .set_default("even")
+    .describe("Used in ‘reflect’, and ‘symmetric’. "
+              "The ‘even’ style is the default with an unaltered reflection around "
+              "the edge value. For the ‘odd’ style,"
+              "the extended part of the array is created by subtracting the "
+              "reflected values from two times the edge value.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >=0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+template <typename xpu, int req, bool back, int ndim>
+struct constant_pad {
+  template <typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  double constant_value) {
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        KERNEL_ASSIGN(out[i], req, constant_value);
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      index_t l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct pad_copy {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    // if is origin
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      for (m = 0; m < ndim; m++) {
+        indexshape[m] = indexshape[m] - indexwidth[m * 2];
+      }
+      int l = rravel<ndim>(j, ishape);
+      KERNEL_ASSIGN(out[i], req, a[l]);
+    } else {
+      return;
+    }
+  }
+};
+
+template <typename xpu, int req, bool bac, int ndim>
+struct symmetric_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] || indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] && indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      // the round of this element
+      int round = (distance - 1) / total;
+      int position = distance % total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      } else {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2]+ishape[index]);
+      int total = ishape[index];
+      int position = distance % total;
+      int round = (distance - 1) / total;
+      if (position == 0) {
+        position = ishape[index];
+      }
+      if (round % 2 == 0) {
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position - 1);
+      } else {
+        indexshape[index] = indexwidth[index * 2] + position - 1;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct edge_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+      // we can not do this now, since this is a former axis
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+    // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+    // we need to do the assignment
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2]+ishape[index])) {
+      indexshape[index] = indexwidth[index * 2] + ishape[index] - 1;
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct reflect_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2]) {
+      // we need to do the assignment
+      int distance = indexwidth[index * 2] - indexshape[index];
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    } else if (indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      int distance = (indexshape[index]+1) - (indexwidth[index * 2] + ishape[index]);
+      int total = ishape[index];
+      if (total == 1) {
+        indexshape[index] = indexwidth[index * 2];
+        int l = rravel<ndim>(j, oshape);
+        KERNEL_ASSIGN(out[i], req, out[l]);
+        return;
+      }
+      int round = (distance - 1) / (total - 1);
+      if (round % 2 == 0) {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + ishape[index] - 1 - (position);
+      } else {
+        int position = (distance + round) % total;
+        indexshape[index] = indexwidth[index * 2] + position;
+      }
+      int l = rravel<ndim>(j, oshape);
+      KERNEL_ASSIGN(out[i], req, out[l]);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct max_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= indexwidth[index * 2] + ishape[index]) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int max_count = 0;
+      auto max_value = out[l];
+      for (max_count = 0; max_count < ishape[index]; max_count++) {
+        indexshape[index] = indexwidth[index * 2] + max_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] > max_value) {
+            max_value = out[l];
+        }
+      }
+      KERNEL_ASSIGN(out[i], req, max_value);
+    }
+  }
+};
+
+template <typename xpu, int req, bool back, int ndim>
+struct min_pad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape* ishape,
+                                  const DTypeShape* oshape,
+                                  mshadow::Shape<ndim*2> width,
+                                  size_t index){
+    using namespace mxnet_op;
+    auto j = uunravel<ndim>(i, oshape);
+    size_t m;
+    bool origin = true;
+    index_t* indexwidth = width.shape_;
+    index_t* indexshape = j.shape_;
+    for (m = 0; m < index; m++) {
+      if (indexshape[m] < indexwidth[m * 2] ||
+          indexshape[m] >= indexwidth[m * 2] + ishape[m]) {
+        // we can not do this now
+        return;
+      }
+    }
+    for (m = 0; m < ndim; m++) {
+      if (indexshape[m] >= indexwidth[m * 2] &&
+          indexshape[m] < indexwidth[m * 2] + ishape[m]) {
+        continue;
+      } else {
+        origin = false;
+        break;
+      }
+    }
+    if (origin) {
+      // this thread is in the origin position, then return
+      return;
+    }
+    if (indexshape[index] < indexwidth[index * 2] ||
+        indexshape[index] >= (indexwidth[index * 2] + ishape[index])) {
+      indexshape[index] = indexwidth[index * 2];
+      int l = rravel<ndim>(j, oshape);
+      int min_count = 0;
+      auto min_value = out[l];
+      for (min_count = 0; min_count < ishape[index]; min_count++) {
+        indexshape[index] = indexwidth[index * 2] + min_count;
+        l = rravel<ndim>(j, oshape);
+        if (out[l] < min_value) {
+            min_value = out[l];
+        }
+      }
+      j = uunravel<ndim>(i, oshape);
+      KERNEL_ASSIGN(out[i], req, min_value);
+    } else {
+      return;
+    }
+  }
+};
+
+
+template <typename xpu, int req, bool back>
+struct pad_grad {
+  template<typename DType, typename DTypeShape>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out, const DType *a,
+                                  const DTypeShape *ishape,
+                                  const DTypeShape *oshape){
+    using namespace mxnet_op;
+    KERNEL_ASSIGN(out[i], req, 1);
+  }
+};
+
+template<typename xpu, bool back, typename ShapeDType>
+void NumpyPadOpImpl(const TBlob& in_data,
+                    const TBlob& out_data,
+                    ShapeDType* ishape,
+                    ShapeDType* oshape,
+                    index_t dsize,
+                    const NumpyPadParam& param,
+                    const std::vector<OpReqType>& req,
+                    mxnet_op::Stream<xpu> *s) {
+  using namespace mxnet_op;
+  using namespace mshadow;
+  int mode = param.mode;
+  int ndim = in_data.ndim();
+  MXNET_NDIM_SWITCH(ndim, NDim, {
+    mshadow::Shape<NDim*2> width;
+    int dimcounter = 0;
+    index_t* odptr = reinterpret_cast<index_t*>(oshape);
+    if (ndim == 1) {
+      width[0] = param.pad_width[0][0];
+      width[1] = param.pad_width[1][0];
+    } else {
+      for (dimcounter = 0; dimcounter < NDim; dimcounter++) {
+        width[dimcounter*2] = param.pad_width[dimcounter][0];
+        width[dimcounter*2 + 1] = param.pad_width[dimcounter][1];
+      }
+    }
+    if (!back) {
+      index_t* idptr = reinterpret_cast<index_t*>(ishape);
+      if (mode == 1) {
+      // constant padding start
+        MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+          MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+            Kernel<constant_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+              s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+              idptr, odptr, width, param.constant_value);
+          });
+        });
+      // constant padding end
+      } else {
+        MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+          MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+            Kernel<pad_copy<xpu, req_type, back, NDim>, xpu>::Launch(
+              s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+              idptr, odptr, width);
+          });
+        });
+        index_t index;
+        index_t dim = ndim;
+        if (mode == 2) {
+          // symmetric padding start
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<symmetric_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 3) {
+          // edge padding start
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<edge_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 4) {
+          // reflect padding start
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<reflect_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 5) {
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<max_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else if (mode == 6) {
+          for (index = dim-1; index >= 0; index--) {
+            MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+              MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+                Kernel<min_pad<xpu, req_type, back, NDim>, xpu>::Launch(
+                  s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+                  idptr, odptr, width, index);
+              });
+            });
+          }
+        } else {
+          // not support yet
+        }
+      }
+    } else {
+      index_t* idptr = reinterpret_cast<index_t*>(ishape);
+      MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+        MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+          Kernel<pad_grad<xpu, req_type, back>, xpu>::Launch(
+            s, dsize, out_data.dptr<DType>(), in_data.dptr<DType>(),
+            idptr, odptr);
+        });
+      });
+    }
+  })
+}
+
+
+template<typename xpu>
+void NumpyPadOpForward(const nnvm::NodeAttrs& attrs,
+                       const OpContext& ctx,
+                       const std::vector<TBlob>& inputs,
+                       const std::vector<OpReqType>& req,
+                       const std::vector<TBlob>& outputs) {
+  MXNET_NDIM_SWITCH(inputs[0].ndim(), NDim, {
+    using namespace mxnet_op;
+    using namespace mshadow;
+    CHECK_EQ(inputs.size(), 1U);
+    CHECK_EQ(outputs.size(), 1U);
+    CHECK_EQ(req.size(), 1U);
+    CHECK_EQ(req[0], kWriteTo);
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    const TBlob& in_data = inputs[0];
+    const TBlob& out_data = outputs[0];
+    size_t ts = in_data.ndim();
+    size_t count;
+    mshadow::Shape<NDim> inshape;
+    for (count = 0; count < ts; count++) {
+      inshape[count] = static_cast<index_t>((in_data.shape_)[count]);
+    }
+
+    Tensor<xpu, 1, index_t> tsp = ctx.requested[0].
+                                  get_space_typed<xpu, 1, index_t>(Shape1(2*ts), s);
+    Tensor<cpu, 1, index_t> ta(reinterpret_cast<index_t*>(inshape.shape_),
+                               Shape1(ts), ctx.get_stream<cpu>());
+    Tensor<xpu, 1, index_t> ti(reinterpret_cast<index_t*>(tsp.dptr_),
+                               Shape1(ts), ctx.get_stream<xpu>());
+    mshadow::Copy(ti, ta, ctx.get_stream<xpu>());
+
+    mshadow::Shape<NDim> outshape;
+    for (count = 0; count < ts; count++) {
+      outshape[count] = static_cast<index_t>((out_data.shape_)[count]);
+    }
+    index_t* wcp = tsp.dptr_;
+    wcp += ts;
+    Tensor<cpu, 1, index_t> tb(reinterpret_cast<index_t*>(outshape.shape_),
+                               Shape1(ts), ctx.get_stream<cpu>());
+    Tensor<xpu, 1, index_t> to(reinterpret_cast<index_t*>(wcp), Shape1(ts),
+                               ctx.get_stream<xpu>());
+    mshadow::Copy(to, tb, ctx.get_stream<xpu>());
+    const NumpyPadParam& param = nnvm::get<NumpyPadParam>(attrs.parsed);
+
+    index_t* wt = reinterpret_cast<index_t*>(to.dptr_);
+    index_t* wi = reinterpret_cast<index_t*>(ti.dptr_);
+
+    NumpyPadOpImpl<xpu, false, index_t>(in_data, out_data, wi,
+                               wt, out_data.Size(), param, req, s);
 
 Review comment:
   Fix the argument alignment here too (align the continuation line with the opening parenthesis of the call).


[GitHub] [incubator-mxnet] haojin2 merged pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 merged pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328
 
 
   


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r380465021
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -6787,6 +6787,63 @@ def hybrid_forward(self,F,a):
             assert_almost_equal(mx_out.asnumpy(), np_out, rtol=rtol, atol=atol)
 
 
+@with_seed()
+@use_np
+def test_np_pad():
+    class TestPad(HybridBlock):
+        def __init__(self, pad_width, mode='constant'):
+            super(TestPad,self).__init__()
+            self._pad_width = pad_width
+            self._mode = mode
+        def hybrid_forward(self,F,A,**kwargs):
+            return F.np.pad(A, self._pad_width, mode=self._mode, **kwargs)
 
 Review comment:
   better add an extra blank line below.


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367143959
 
 

 ##########
 File path: python/mxnet/ndarray/numpy/_op.py
 ##########
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
 
 Review comment:
   extra blank line above
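A small sketch of the docstring layout this comment asks for, with a blank line between the summary and the section header (the signature is copied from the hunk above):

```python
def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
    """
    Pad an array.

    Parameters
    ----------
    array : array_like of rank N
        The array to pad.
    """
```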


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r381000527
 
 

 ##########
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##########
 @@ -6414,4 +6414,129 @@ def bincount(x, weights=None, minlength=0):
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(x, pad_width, mode='constant', **kwargs): # pylint: disable=too-many-arguments
+    """
+    Pad an array.
+
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+            not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet.
+        'empty'
+            not supported yet.
+        <function>
+            not supported yet.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+
+    Returns
+    -------
+    pad : ndarray
+        Padded array of rank equal to `array` with shape increased
+        according to `pad_width`.
+    """
+    # pylint: disable = too-many-return-statements, inconsistent-return-statements
+    if not isinstance(pad_width, tuple):
+        raise TypeError("`pad_width` must be tuple.")
+    if mode == "linear_ramp":
+        raise ValueError("mode {'linear_ramp'} is not supported.")
+    if mode == "wrap":
+        raise ValueError("mode {'wrap'} is not supported.")
+    if mode == "median":
+        raise ValueError("mode {'median'} is not supported.")
+    if mode == "mean":
+        raise ValueError("mode {'mean'} is not supported.")
+    if mode == "empty":
+        raise ValueError("mode {'empty'} is not supported.")
+    if callable(mode):
+        raise ValueError("mode {'<function>'} is not supported.")
+    allowed_kwargs = {
+        'empty': [], 'edge': [], 'wrap': [],
+        'constant': ['constant_values'],
+        'linear_ramp': ['end_values'],
+        'maximum': ['stat_length'],
+        'mean': ['stat_length'],
+        'median': ['stat_length'],
+        'minimum': ['stat_length'],
+        'reflect': ['reflect_type'],
+        'symmetric': ['reflect_type'],
+    }
+    try:
+        unsupported_kwargs = set(kwargs) - set(allowed_kwargs[mode])
+    except KeyError:
+        raise ValueError("mode '{}' is not supported".format(mode))
+    if unsupported_kwargs:
+        raise ValueError("unsupported keyword arguments for mode '{}': {}"
+                         .format(mode, unsupported_kwargs))
+    if mode == "constant":
+        values = kwargs.get("constant_values", 0)
+        if isinstance(values, tuple):
+            raise TypeError("unsupported constant_values type: {'tuple'}.")
+        return _npi.pad(x, pad_width, mode='constant', constant_value=values)
+    elif mode == "symmetric":
+        values = kwargs.get("reflect_type", "even")
+        if values != "even" and values is not None:
+            raise ValueError("unsupported reflect_type '{}'".format(values))
+        return _npi.pad(x, pad_width, mode='symmetric', reflect_type="even")
+    elif mode == "edge":
+        return _npi.pad(x, pad_width, mode='edge')
+    elif mode == "reflect":
+        values = kwargs.get("reflect_type", "even")
+        if values != "even" and values is not None:
+            raise ValueError("unsupported reflect_type '{}'".format(values))
+        return _npi.pad(x, pad_width, mode='reflect', reflect_type="even")
+    elif mode == "maximum":
+        values = kwargs.get("stat_length", None)
+        if values is not None:
+            raise ValueError("unsupported stat_length '{}'".format(values))
+        return _npi.pad(x, pad_width, mode='maximum')
+    elif mode == "minimum":
+        values = kwargs.get("stat_length", None)
+        if values is not None:
+            raise ValueError("unsupported stat_length '{}'".format(values))
+        return _npi.pad(x, pad_width, mode='minimum')
+    return _npi.pad(x, pad_width, mode='constant', constant_value=0)
+
 
 Review comment:
   one more blank line below
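For context, a brief, hypothetical usage sketch of the user-facing API reviewed in this hunk, assuming a build that includes this PR (array values are illustrative only):

```python
from mxnet import np, npx
npx.set_np()  # enable NumPy-compatible semantics

a = np.array([[1., 2.], [3., 4.]])
pw = ((1, 1), (1, 1))  # one row/column of padding on every side

print(np.pad(a, pw))                  # 'constant' is the default, pads with 0
print(np.pad(a, pw, mode='edge'))     # repeats the border values
print(np.pad(a, pw, mode='reflect'))  # mirrors around the edge values
print(np.pad(a, pw, mode='maximum'))  # pads with the per-axis maximum
```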


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367145836
 
 

 ##########
 File path: tests/python/unittest/test_numpy_op.py
 ##########
 @@ -5597,6 +5597,59 @@ def hybrid_forward(self,F,a):
             assert_almost_equal(mx_out.asnumpy(), np_out, rtol=rtol, atol=atol)
 
 
+@with_seed()
+@use_np
+def test_np_pad():
+    class TestPad(HybridBlock):
+        def __init__(self, pad_width = (), mode="constant", reflect_type="even", constant_values=0):
+            super(TestPad,self).__init__()
+            self._pad_width = pad_width
+            self._mode = mode
+            self._reflect_type =reflect_type
+            self._constant_values = constant_values
+        def hybrid_forward(self,F,A):
+            return F.np.pad(A, self._pad_width, mode=self._mode, reflect_type=self._reflect_type, constant_values=self._constant_values)
+    shapes = [(1,5), (2,2), (2,2), (3,3), (2,3), (3,4,5)]
+    dtypes = [np.int8, np.uint8, np.int32, np.int64, np.float16, np.float32, np.float64]
+    mode = ['constant', 'reflect', 'symmetric', 'edge', 'minimum']
+    for hybridize, shape, dtype, in itertools.product([False,True], shapes, dtypes):
+        rtol = 1e-2 if dtype == np.float16 else 1e-3
+        atol = 1e-4 if dtype == np.float16 else 1e-5
+
+        for m in mode:
+            x = np.random.uniform(-1.0, 1.0, size = shape).astype(dtype)
+            pw = ()
+            if (type(shape) == int):
+                pw += (2,3)
+            else:
+                for i in range(len(shape)):
+                    pw += ((2,3),)
+            test_pad = TestPad(pw, m, "even", 0)
+            if hybridize:
+                test_pad.hybridize()
+            x.attach_grad()
+
+            if(m != 'constant'):
+                np_out = _np.pad(x.asnumpy(), pw, mode=m)
+            else:
+                np_out = _np.pad(x.asnumpy(), pw, mode=m, constant_values=0)
+            with mx.autograd.record():
+                mx_out = test_pad(x)
+
+            # Code to get the reference backward value
+            assert mx_out.shape == np_out.shape
+            assert_almost_equal(mx_out.asnumpy(), np_out, rtol = rtol, atol = atol)
 
 Review comment:
   we'll also have to check the gradient since we have the backward implemented.
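A minimal sketch of such a gradient check, assuming a build that includes this PR; for 'constant' mode every input element maps to exactly one output element, so the expected gradient is an array of ones:

```python
import numpy as _np
import mxnet as mx
from mxnet import np, npx
npx.set_np()

x = np.random.uniform(-1.0, 1.0, size=(2, 3))
x.attach_grad()
with mx.autograd.record():
    y = np.pad(x, ((2, 3), (2, 3)), mode='constant', constant_values=0)
y.backward()
# Every input element contributes to exactly one output element,
# so d(sum(y))/dx is all ones.
assert _np.allclose(x.grad.asnumpy(), _np.ones(x.shape))
```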


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

Posted by GitBox <gi...@apache.org>.
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367143987
 
 

 ##########
 File path: python/mxnet/ndarray/numpy/_op.py
 ##########
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+    """
+    Pad an array.
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the
+            vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+           not supported yet
+        'minimum'
+            Pads with the minimum value of all of the
+            vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on
+            the first and last values of the vector along each
+            axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored
+            along the edge of the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+        <function>
+            Padding function, see Notes.
+    stat_length : not supported yet
+    constant_values : scalar, optional
+        Used in 'constant'.  The values to set the padded values for each
+        axis.
+        Default is 0.
+    end_values : not supported yet
+    reflect_type : {'even', 'odd'}, optional
+        only support even now
+    Returns
 
 Review comment:
   extra blank line above
