Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/12/22 20:02:41 UTC

[GitHub] [tvm] masahi opened a new pull request #9795: [CUTLASS] Conv2d activation fusion, part 2: Sigmoid fp16, SiLU and HardSwish

masahi opened a new pull request #9795:
URL: https://github.com/apache/tvm/pull/9795


   @comaniac @Laurawly 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on a change in pull request #9795: [CUTLASS] Conv2d activation fusion, part 2: Sigmoid fp16, SiLU and HardSwish

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #9795:
URL: https://github.com/apache/tvm/pull/9795#discussion_r774143427



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -1735,14 +1735,19 @@ def pad(inputs, input_types):
             paddings = [paddings[i : i + 2] for i in range(0, len(paddings), 2)]
 
             const_paddings = []
+            non_zero_found = False
             for pad in paddings:
                 const_paddings.append([])
                 for p in pad:
                     if not isinstance(p, int):
                         p = int(_infer_value(p, {}).numpy())
                     const_paddings[-1].append(p)
+                    if p != 0:
+                        non_zero_found = True
 
-            if mode == "constant":
+            if not non_zero_found:
+                return data

Review comment:
       This is a minor optimization, but it non-trivially improved performance on the DETR model. @comaniac 
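
   The shortcut in the diff above can be illustrated standalone. This is a hypothetical sketch, not the TVM frontend code itself: `normalize_paddings` and `is_noop_pad` are invented helper names that mirror the logic of the converter, which groups PyTorch's flat padding list into per-axis pairs and bails out early when every width is zero (so the op pads nothing and the input can be returned as-is).

   ```python
   def normalize_paddings(paddings):
       """Group a flat PyTorch-style padding list into per-axis [before, after] pairs."""
       return [paddings[i : i + 2] for i in range(0, len(paddings), 2)]


   def is_noop_pad(paddings):
       """Return True when no axis is actually padded, i.e. the pad op is a no-op."""
       return all(p == 0 for pair in normalize_paddings(paddings) for p in pair)
   ```

   On models such as DETR, graphs exported from PyTorch can contain pad ops whose widths are all zero; skipping them avoids emitting (and later executing) a tensor copy that does nothing.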







[GitHub] [tvm] masahi merged pull request #9795: [CUTLASS] Conv2d activation fusion, part 2: Sigmoid fp16, SiLU and HardSwish

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #9795:
URL: https://github.com/apache/tvm/pull/9795


   





[GitHub] [tvm] masahi commented on a change in pull request #9795: [CUTLASS] Conv2d activation fusion, part 2: Sigmoid fp16, SiLU and HardSwish

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #9795:
URL: https://github.com/apache/tvm/pull/9795#discussion_r774145206



##########
File path: src/relay/op/dyn/tensor/transform.cc
##########
@@ -467,6 +467,9 @@ bool StridedSliceRel(const Array<Type>& types, int num_inputs, const Attrs& attr
   int64_t num_axis = dshape.size();
 
   const auto* begin = types[1].as<TensorTypeNode>();
+  if (begin == nullptr) {
+    return false;
+  }

Review comment:
       This and the change below in `src/relay/op/tensor/transform.cc` fix the type inference issue mentioned in the "Known issues" section of https://github.com/apache/tvm/pull/9746
   
   No test is added because the issue is hard to reproduce in a simple test case and the change is trivial.
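
   The pattern in the C++ diff above can be sketched in Python. This is a hypothetical illustration of the guard, not TVM's actual type-relation API: during iterative type inference, an input type may not be resolved yet, and a relation must report "cannot resolve yet" (here, returning `False`) rather than dereferencing a null type and crashing.

   ```python
   class TensorType:
       """Stand-in for a fully resolved tensor type in a type-inference pass."""

       def __init__(self, shape):
           self.shape = shape


   def strided_slice_rel(types):
       """Toy type relation: succeed only once the `begin` input type is known.

       Mirrors the guard added in StridedSliceRel: if types[1] is not yet a
       concrete TensorType, return False so the solver retries this relation
       on a later iteration instead of dereferencing a null pointer.
       """
       begin = types[1] if isinstance(types[1], TensorType) else None
       if begin is None:
           return False
       # ... the real relation would compute the output shape here ...
       return True
   ```

   Returning `False` simply defers resolution; the solver revisits the relation after other constraints have filled in the missing input type.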







[GitHub] [tvm] comaniac commented on a change in pull request #9795: [CUTLASS] Conv2d activation fusion, part 2: Sigmoid fp16, SiLU and HardSwish

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #9795:
URL: https://github.com/apache/tvm/pull/9795#discussion_r774236908



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -1735,14 +1735,19 @@ def pad(inputs, input_types):
             paddings = [paddings[i : i + 2] for i in range(0, len(paddings), 2)]
 
             const_paddings = []
+            non_zero_found = False
             for pad in paddings:
                 const_paddings.append([])
                 for p in pad:
                     if not isinstance(p, int):
                         p = int(_infer_value(p, {}).numpy())
                     const_paddings[-1].append(p)
+                    if p != 0:
+                        non_zero_found = True
 
-            if mode == "constant":
+            if not non_zero_found:
+                return data

Review comment:
       Hmm, interesting. I hadn't realized we may have pad ops that actually pad nothing.



