Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/03/05 16:36:16 UTC

[GitHub] [tvm] mbrookhart opened a new pull request #7603: Move SimplifyConvPad to a new pass and don't enable it by default

mbrookhart opened a new pull request #7603:
URL: https://github.com/apache/tvm/pull/7603


   #7515 caused a regression on the Vulkan backend. This PR separates that optimization into a separate pass and disables it by default while we hunt for the issue in SPIR-V.
   
   cc @tmoreau89 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on a change in pull request #7603: Move SimplifyConvPad to a new pass and don't enable it by default

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7603:
URL: https://github.com/apache/tvm/pull/7603#discussion_r588616587



##########
File path: src/relay/transforms/implicit_padding.cc
##########
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/transforms/implicit_padding.cc
+ * \brief A pass for folding explicit pads into ops that support implicit padding.
+ */
+
+#include <tvm/relay/dataflow_matcher.h>
+#include <tvm/relay/expr.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/relay/transform.h>
+#include <tvm/support/logging.h>
+
+#include "../op/tensor/transform.h"
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+class SimplifyPattern {

Review comment:
       Move this to `pattern_utils.h` to avoid dup







[GitHub] [tvm] masahi commented on pull request #7603: Move SimplifyConvPad to a new pass and don't enable it by default

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7603:
URL: https://github.com/apache/tvm/pull/7603#issuecomment-791877791


   Thanks @mbrookhart @csullivan 





[GitHub] [tvm] csullivan commented on a change in pull request #7603: Move SimplifyConvPad to a new pass and don't enable it by default

Posted by GitBox <gi...@apache.org>.
csullivan commented on a change in pull request #7603:
URL: https://github.com/apache/tvm/pull/7603#discussion_r588493674



##########
File path: python/tvm/relay/transform/transform.py
##########
@@ -1099,6 +1099,19 @@ def SimplifyExpr():
     return _ffi_api.SimplifyExpr()
 
 
+def ImplicitPadding():

Review comment:
       If we build up multiple patterns that can fuse implicit padding, it may be useful to toggle certain patterns on or off based on operator type via an allowlist or denylist.
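A minimal Python sketch of that idea, with hypothetical names (none of this is existing TVM API): a registry keyed by operator name, filtered through an allowlist/denylist before the callbacks are handed to the rewriter.

```python
# Hypothetical sketch of gating pattern rewrites by operator name.
# The names (PatternRegistry, active) are illustrative, not TVM API.

class PatternRegistry:
    def __init__(self):
        self._patterns = {}  # operator name -> rewrite callback

    def register(self, op_name, callback):
        self._patterns[op_name] = callback

    def active(self, allowlist=None, denylist=None):
        """Return the registered callbacks filtered by operator name."""
        names = set(self._patterns)
        if allowlist is not None:
            names &= set(allowlist)
        if denylist is not None:
            names -= set(denylist)
        return [self._patterns[n] for n in sorted(names)]


reg = PatternRegistry()
reg.register("nn.conv2d", lambda expr: expr)
reg.register("nn.max_pool2d", lambda expr: expr)
# Only the conv pattern stays enabled while e.g. a backend bug is investigated.
enabled = reg.active(allowlist=["nn.conv2d"])
```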







[GitHub] [tvm] masahi commented on a change in pull request #7603: Move SimplifyConvPad to a new pass and don't enable it by default

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7603:
URL: https://github.com/apache/tvm/pull/7603#discussion_r588617786



##########
File path: src/relay/transforms/implicit_padding.cc
##########
@@ -0,0 +1,211 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/transforms/implicit_padding.cc
+ * \brief A pass for folding explicit pads into ops that support implicit padding.
+ */
+
+#include <tvm/relay/dataflow_matcher.h>
+#include <tvm/relay/expr.h>
+#include <tvm/relay/expr_functor.h>
+#include <tvm/relay/transform.h>
+#include <tvm/support/logging.h>
+
+#include "../op/tensor/transform.h"
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+class SimplifyPattern {
+ public:
+  virtual Expr callback(const Expr& pre, const Expr& post,
+                        const Map<DFPattern, Array<Expr>>& node_map) const = 0;
+
+  DFPattern pattern() const { return pattern_; }
+
+ protected:
+  /*! \brief Pattern for rewriting */
+  DFPattern pattern_;
+};
+/*!
+ * \brief SimplifyConvPad matches a pad followed by a conv/convtranspose/pool/etc
+ * with a pad attribute and merges the padding into the kernel.
+ */
+class SimplifyConvPad : public SimplifyPattern {
+ public:
+  SimplifyConvPad() {
+    x_ = IsWildcard();
+    w_ = IsWildcard();
+    pad_ = IsOp("nn.pad")({x_});
+    conv1d_ = IsOp("nn.conv1d");
+    conv2d_ = IsOp("nn.conv2d");
+    conv3d_ = IsOp("nn.conv3d");
+    conv_ = (conv1d_ || conv2d_ || conv3d_)({pad_, w_});
+    pattern_ = conv_;
+  }
+  template <typename T>
+  Attrs MakeConvAttrs(const T* old_attrs, const Array<PrimExpr> padding) const {
+    ICHECK(old_attrs);
+    ICHECK(padding.size() == old_attrs->padding.size())
+        << "Number of dimensions to pad and convolution padding attributes should have the same "
+           "extent";
+
+    auto new_attrs = make_object<T>();
+    Array<PrimExpr> combined_padding;
+    for (size_t i = 0; i < padding.size(); ++i) {
+      combined_padding.push_back(padding[i] + old_attrs->padding[i]);
+    }
+    new_attrs->strides = old_attrs->strides;
+    new_attrs->padding = combined_padding;
+    new_attrs->dilation = old_attrs->dilation;
+    new_attrs->groups = old_attrs->groups;
+    new_attrs->channels = old_attrs->channels;
+    new_attrs->kernel_size = old_attrs->kernel_size;
+    new_attrs->data_layout = old_attrs->data_layout;
+    new_attrs->kernel_layout = old_attrs->kernel_layout;
+    new_attrs->out_layout = old_attrs->out_layout;
+    new_attrs->out_dtype = old_attrs->out_dtype;
+    return Attrs(new_attrs);
+  }
+  template <typename T>
+  Attrs GetAttrs(const PadAttrs* param, const T* attrs) const {
+    ICHECK(param);
+    ICHECK(attrs);
+    ICHECK(attrs->data_layout.size() == param->pad_width.size())
+        << "Data Layout and padding attributes should have the same extent";
+
+    std::string data_layout = attrs->data_layout;
+    std::set<char> image_dims({'H', 'W', 'D'});
+    Array<PrimExpr> padding;
+    // If we're padding a non-spatial dimension, don't simplify
+    // Convolution can only pad on spatial axes
+    for (size_t i = 0; i < param->pad_width.size(); ++i) {
+      if (!image_dims.count(data_layout[i])) {
+        for (size_t j = 0; j < param->pad_width[i].size(); ++j) {
+          if (param->pad_width[i][j] != 0) {
+            return Attrs();
+          }
+        }
+      }
+    }
+    for (size_t j = 0; j < param->pad_width[0].size(); ++j) {
+      for (size_t i = 0; i < param->pad_width.size(); ++i) {
+        if (image_dims.count(data_layout[i])) {
+          padding.push_back(param->pad_width[i][j]);
+        }
+      }
+    }
+
+    return MakeConvAttrs(attrs, padding);
+  }
+  Expr callback(const Expr& pre, const Expr& post,
+                const Map<DFPattern, Array<Expr>>& node_map) const override {
+    const CallNode* call_node = post.as<CallNode>();
+    ICHECK(call_node);
+    auto pad = node_map[pad_][0];
+    const CallNode* pad_node = pad.as<CallNode>();
+    ICHECK(pad_node);
+    const PadAttrs* param = pad_node->attrs.as<PadAttrs>();
+    ICHECK(param);
+    if (param->pad_mode == "constant" && param->pad_value == 0.0) {
+      Attrs attrs;
+      if (node_map.count(conv1d_)) {
+        attrs = GetAttrs(param, call_node->attrs.as<Conv1DAttrs>());
+      } else if (node_map.count(conv2d_)) {
+        attrs = GetAttrs(param, call_node->attrs.as<Conv2DAttrs>());
+      } else if (node_map.count(conv3d_)) {
+        attrs = GetAttrs(param, call_node->attrs.as<Conv3DAttrs>());
+      } else {
+        return post;
+      }
+      if (!attrs.defined()) {
+        return post;
+      }
+      auto x = node_map[x_][0];
+      auto w = node_map[w_][0];
+      return Call(call_node->op, {x, w}, attrs, call_node->type_args, call_node->span);
+    }
+    return post;
+  }
+
+ private:
+  /*! \brief Pattern input */
+  DFPattern x_;
+  /*! \brief Pattern input weight */
+  DFPattern w_;
+  /*! \brief Pattern pad */
+  DFPattern pad_;
+  /*! \brief Pattern conv */
+  DFPattern conv_;
+  DFPattern conv1d_;
+  DFPattern conv2d_;
+  DFPattern conv3d_;
+};
+
+class SimplifyExplicitPadding {
+ public:
+  explicit SimplifyExplicitPadding(IRModule mod) : mod_(mod) {
+    CreateCallback(SimplifyConvPad());
+    // TODO(mbrookhart): ConvTranspose(Pad(x)), Pool(Pad(x))
+  }
+  template <typename T>
+  void CreateCallback(const T& pattern) {
+    auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
+      Expr pre = args[0];
+      Expr post = args[1];
+      Map<DFPattern, Array<Expr>> node_map = args[2];
+      *rv = pattern.callback(pre, post, node_map);
+    };
+    callbacks_.push_back(DFPatternCallback(pattern.pattern(), PackedFunc(func), true));
+  }
+
+  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, mod_); }
+
+ private:
+  IRModule mod_;
+  /*! \brief Callbacks for expr simplification */
+  Array<DFPatternCallback> callbacks_;
+};
+
+/*!
+ * \brief ImplicitPadding finds explict padding before an op that can
+ * support implicit padding and fuses them.
+ */
+Expr ImplicitPadding(const Expr& expr, const IRModule& mod) {
+  return SimplifyExplicitPadding(mod).Simplify(expr);
+}
+
+namespace transform {
+
+Pass ImplicitPadding() {
+  runtime::TypedPackedFunc<Function(Function, IRModule, PassContext)> pass_func =

Review comment:
       I think pass names generally follow `VerbSomething`. How about `FoldPadding` etc? 
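For readers skimming the diff above, the padding-merge rule in `GetAttrs`/`MakeConvAttrs` can be sketched as a standalone Python function (illustrative only, not TVM API): pad amounts on non-spatial axes veto the rewrite, and spatial amounts are added element-wise to the convolution's own `padding` attribute in (befores..., afters...) order.

```python
# Standalone sketch of the padding-merge rule from the diff above
# (illustrative, not TVM API).

def merge_pad_into_conv(pad_width, conv_padding, data_layout):
    """pad_width: per-axis (before, after) pairs from nn.pad.
    conv_padding: the conv op's flat padding, e.g. [top, left, bottom, right].
    data_layout: e.g. "NCHW". Returns merged padding, or None if not foldable.
    """
    spatial = {"H", "W", "D"}
    # Convolution padding is spatial-only: any pad on N/C axes aborts the fold.
    for axis, (before, after) in zip(data_layout, pad_width):
        if axis not in spatial and (before or after):
            return None
    # Flatten spatial pads as [all befores..., all afters...], matching the
    # [before_H, before_W, after_H, after_W] layout of Conv2D padding.
    flat = [pair[side]
            for side in range(2)
            for axis, pair in zip(data_layout, pad_width)
            if axis in spatial]
    return [p + q for p, q in zip(flat, conv_padding)]


# nn.pad of (1, 1) on H and (2, 2) on W folded into an unpadded NCHW conv2d:
merged = merge_pad_into_conv([(0, 0), (0, 0), (1, 1), (2, 2)],
                             [0, 0, 0, 0], "NCHW")
# merged == [1, 2, 1, 2]
```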







[GitHub] [tvm] masahi merged pull request #7603: Move SimplifyConvPad to a new pass and don't enable it by default

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #7603:
URL: https://github.com/apache/tvm/pull/7603


   

