Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/10/20 19:58:25 UTC

[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #6720: [RELAY] Refactor FoldConstant to skip TNonComputationalOps

jwfromm commented on a change in pull request #6720:
URL: https://github.com/apache/incubator-tvm/pull/6720#discussion_r508800391



##########
File path: src/relay/qnn/op/concatenate.cc
##########
@@ -207,6 +207,7 @@ RELAY_REGISTER_OP("qnn.concatenate")
                   "The quantization zero_point of the output tensor.")
     .set_support_level(11)
     .add_type_rel("QnnConcatenate", QnnConcatenateRel)
+    .set_attr<TNonComputational>("TNonComputational", true)

Review comment:
       Where is `TNonComputational` defined? I don't see it in this PR.
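       For context, op attributes like this are usually declared as simple type aliases next to the other Relay op attribute types. A minimal sketch of what such a declaration might look like, assuming it follows that convention (the header location and doc comment are my assumptions, not something shown in this PR):

       ```cpp
       // Sketch only -- assuming TNonComputational follows the pattern of the other
       // Relay op attributes declared in include/tvm/relay/op_attr_types.h.
       namespace tvm {
       namespace relay {

       /*!
        * \brief Marks an operator as non-computational (e.g. a pure QNN wrapper op),
        * so that passes such as FoldConstant can skip or special-case it.
        */
       using TNonComputational = bool;

       }  // namespace relay
       }  // namespace tvm
       ```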

##########
File path: src/relay/transforms/fold_constant.cc
##########
@@ -151,9 +151,12 @@ class ConstantFolder : public MixedModeMutator {
     }
 
     // We should think about potentially constant evaluation over these ops too.
-    if (call->op == invoke_tvm_op_ || call->op == shape_func_op_ || call->op == alloc_tensor_op_ ||
-        call->op == alloc_storage_op_ || call->op == device_copy_op_) {

Review comment:
       Do we need to add all of these ops as non-computational? It looks like, with this change, ops like `shape_func_op_` will now be handled differently.
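       For reference, here is a hedged sketch of the attribute-driven check that presumably replaces these hard-coded comparisons. The helper name `IsNonComputational` is invented for illustration, and it assumes the ops in question carry the `TNonComputational` attribute sketched above:

       ```cpp
       #include <tvm/ir/op.h>
       #include <tvm/relay/expr.h>
       #include <tvm/relay/op_attr_types.h>

       namespace tvm {
       namespace relay {

       // Hypothetical helper (not from the PR): returns true when the callee op
       // is tagged with the TNonComputational attribute.
       bool IsNonComputational(const Expr& op) {
         // Look up the per-op attribute map once; ops without the attribute
         // fall back to the default value `false`.
         static const auto fnoncomputational =
             Op::GetAttrMap<TNonComputational>("TNonComputational");
         if (const auto* op_node = op.as<OpNode>()) {
           return fnoncomputational.get(GetRef<Op>(op_node), false);
         }
         return false;
       }

       }  // namespace relay
       }  // namespace tvm
       ```

       If that is the shape of the refactor, the set of ops FoldConstant skips would be determined by which ops register the attribute rather than by this explicit list, so every op in the old condition would need the attribute set to preserve behavior.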




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org