Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/02 07:50:47 UTC

[GitHub] [incubator-tvm] masahi commented on a change in pull request #6374: [Torch] Support logsumexp, clean up unnecessary infer_shape usage

masahi commented on a change in pull request #6374:
URL: https://github.com/apache/incubator-tvm/pull/6374#discussion_r481849518



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -110,12 +102,20 @@ def inplace_add_to_add(op_name):
     if len(intersect) > 0 and intersect != set(["aten::add"]):
         return True
 
-    if is_used_by_list_add(filter(lambda use: use.user.kind() != "prim::Loop", uses)):
-        return True
+    # if add op outputs list, it is dynamic so we need to construct List ADT
+    for use in filter(lambda use: use.user.kind() in ["aten::add", "aten::add_"], uses):

Review comment:
      Yes, with this PR `_should_construct_dynamic_list` runs more often. I got an error from this function when `_get_node_type` was called on an op that returns multiple outputs. I've cleaned up this function a bit so that `_get_node_type` only runs on `aten::add` and `aten::add_` uses, so in that sense it is no worse than before.
   
   I agree that this function is a bit hacky. I added it when I was trying to support Python lists. It works on the test cases I was working on, but it is probably not robust.
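   
   For context, a minimal self-contained sketch of the filtering pattern described above: the `Node`/`Use` stand-ins, the `get_node_type` stub, and the `outputs_dynamic_list` helper are hypothetical simplifications for illustration, not the actual torch JIT objects or frontend code.
   
   ```python
   from collections import namedtuple
   
   # Hypothetical stand-ins for the torch JIT graph objects the frontend walks
   # over; the real code works on torch._C.Node and its Use objects.
   Node = namedtuple("Node", ["kind_name", "output_type"])
   Use = namedtuple("Use", ["user"])
   
   
   def get_node_type(node):
       # Simplified stub of the type lookup: the real helper reads the node's
       # single output type, which is why it must not be called on ops that
       # return multiple outputs.
       return node.output_type
   
   
   def outputs_dynamic_list(uses):
       # Only inspect aten::add / aten::add_ users, so the type lookup is
       # never invoked on an op with multiple outputs.
       for use in filter(lambda u: u.user.kind_name in ("aten::add", "aten::add_"), uses):
           if get_node_type(use.user) == "ListType":
               # The add produces a list, so the list is dynamic and would
               # need to be lowered to a List ADT.
               return True
       return False
   
   
   uses = [
       Use(Node("prim::Loop", None)),       # filtered out
       Use(Node("aten::add", "ListType")),  # list-producing add -> dynamic
   ]
   print(outputs_dynamic_list(uses))  # True
   ```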
   
   



