Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/09/11 01:59:27 UTC

[GitHub] [incubator-tvm] kevinthesun opened a new pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun opened a new pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449


   Some necessary improvements for PyTorch object detection (OD) models.
   
   @zhiics @yongwww @masahi 
   





[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692975419


   The original PyTorch frontend only handles limited cases, mostly static shapes/attributes, where it is fine to keep the input as it is. For more dynamic models, we need to do some extra work to reduce the dynamism during type inference. For example, there is a chance to reduce an output shape of (?, ?, ?) to (1, ?, ?) in a dynamic op. This is necessary; otherwise it is hard to ensure we are doing the right thing for the backend. The error pointed out by @masahi is exactly this case: the input shape of ```get_valid_counts``` should be (1, ?, 5), while somehow a recent change makes it (1, ?, ?), and ```get_valid_counts``` doesn't allow a dynamic box data length. This is an example of why we need to make the output Relay Expr as static as possible, and why ```_infer_value``` is necessary.
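
   To illustrate (a minimal sketch, assuming the public `relay.vision.get_valid_counts` API and `relay.Any()` for dynamic dimensions; not the PR's code):
   ```
   import tvm
   from tvm import relay

   # Box tensor with batch 1, a dynamic number of boxes, and a static
   # box data length of 5 -- the layout get_valid_counts expects here.
   boxes = relay.var("boxes", shape=(1, relay.Any(), 5), dtype="float32")
   valid = relay.vision.get_valid_counts(boxes, score_threshold=0.0)

   # With a fully dynamic last dim, e.g. (1, relay.Any(), relay.Any()),
   # compilation fails -- the kind of error @masahi hit.
   ```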





[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690926938


   @kevinthesun sounds like you already have mask rcnn working :) can't wait





[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

zhiics commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489041666



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):
+    img = "test_street_small.jpg"
+    img_url = (
+        "https://raw.githubusercontent.com/dmlc/web-data/"
+        "master/gluoncv/detection/street_small.jpg"
+    )
+    download(img_url, img)
+
+    input_shape = (1, 3, in_size, in_size)
+    target = "llvm"
+    input_name = "input0"
+    shape_list = [(input_name, input_shape)]
+
+    scripted_model = generate_jit_model(model_index)
+    mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+    print(mod["main"])

Review comment:
       remove it?







[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693093647


   @masahi Yeah. Those ops look like they come from the scripted model. I believe for PyTorch 1.6, if we trace the model, there are 2 or 3 ops missing.





[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487256873



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
    This comes from the weird behavior of ```prim::NumToTensor```, which converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
    Right now the PyTorch frontend just reuses the input dtype for this op's output. For an elemwise op, the PyTorch input dtype is ["int64", "int64"], which looks fine; however, the actual input dtype is ["int64", "int32"]. What I can do is enhance ```_pytorch_promote_types``` so that we run ```_infer_type``` on every input and get the actual input dtype, rather than relying solely on the PyTorch input dtype. Sounds like a plan?
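
    A minimal sketch of that enhancement (a hypothetical helper; it assumes the module-level `_expr`, `_op`, and `_infer_type` names of pytorch.py and promotes dtypes via `numpy.promote_types`, which the in-tree `_pytorch_promote_types` may do differently):
    ```
    import numpy as np

    def _promote_with_inferred_types(inputs, input_types):
        # Prefer the dtype Relay infers for each input over the dtype the
        # TorchScript IR recorded (prim::NumToTensor can widen silently).
        dtype0, dtype1 = input_types[:2]
        if isinstance(inputs[0], _expr.Expr):
            dtype0 = _infer_type(inputs[0]).checked_type.dtype
        if isinstance(inputs[1], _expr.Expr):
            dtype1 = _infer_type(inputs[1]).checked_type.dtype
        # e.g. ("int64", "int32") promotes to "int64".
        dtype = str(np.promote_types(dtype0, dtype1))
        out = []
        for inp, dt in zip(inputs[:2], (dtype0, dtype1)):
            if isinstance(inp, _expr.Expr):
                out.append(_op.cast(inp, dtype) if dt != dtype else inp)
            else:
                out.append(_expr.const(inp, dtype))
        return out
    ```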

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
    The try/except block is mainly for ```_infer_value```. Currently there is no reliable way to call ```_infer_value``` and catch explicit error types; that's why a general Exception is used here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
    ```%11 : int = aten::size(%img.1, %10)``` generates int32, but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we convert ```prim::NumToTensor```, we can only follow the input type, which is int32 here, since there is no other information. So this is about the weird behavior of ```prim::NumToTensor``` rather than about indexing. I'm not sure how many other PyTorch ops have such behavior, but it looks like inferring the actual input type in ```_pytorch_promote_types``` would fix this kind of issue.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
    Sure. I can do what I did for arange: check whether the input is of type ```_expr.Expr```.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
    Same as above: these try/except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
    Updated to use int64 now.







[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692985102


   As I discussed with @masahi, the problem with having a try interface is that there is no common logic between different dynamic ops when dealing with dynamic attributes. We need to take different actions in the try/except block depending on the actual op.





[GitHub] [incubator-tvm] zhiics edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

zhiics edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692367505


   Thanks for the efforts and discussions. @kevinthesun, could you please summarize the solutions/decisions and align with @masahi and @t-vi so that we can move forward?





[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693013272


   @kevinthesun Although having a wrapper API doesn't make the code shorter, I do believe there is value in isolating such a "toxic" programming idiom in one place (e.g. it makes updating easier).
   
   That said, I agree that we can proceed with this code as it is, given that this idiom is (unfortunately) already common in other frontends and such low-level detail is not the point of this PR. I can take a stab at this issue later, since I want to keep the PyTorch frontend clean.
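
   For reference, such a wrapper could look roughly like this (a sketch only: `try_infer_value` is a hypothetical name, and it assumes the frontend's module-level `_infer_value` helper):
   ```
   def try_infer_value(val, on_success=None, on_failure=None):
       """Try to constant-fold `val` with _infer_value.

       Returns (result, True) when folding succeeds and (fallback, False)
       otherwise, so callers keep their op-specific handling while the
       bare try/except lives in one place.
       """
       try:
           ret = _infer_value(val, {}).asnumpy()
           return (on_success(ret) if on_success else ret), True
       except Exception:  # _infer_value raises no dedicated error type
           return (on_failure(val) if on_failure else val), False
   ```
   With that, `_topk` could do e.g. `k, _ = try_infer_value(inputs[1], lambda ret: _expr.const(int(ret)))`.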





[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489087457



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):

Review comment:
    pytest doesn't run the main function of `test_forward.py` (that's why we didn't get an error on CI for the recently introduced typo `lstm_test`, which should be `test_lstm`, fixed by you in this PR). Try running `pytest -k detection`.
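
    One way to make pytest collect these (a sketch; `test_detection_models_parametrized` is a hypothetical wrapper, needed because pytest treats bare positional arguments as fixtures):
    ```
    import pytest

    # Indices follow generate_jit_model: 0 = Faster R-CNN, 1 = Mask R-CNN.
    @pytest.mark.parametrize("model_index", [0, 1])
    def test_detection_models_parametrized(model_index):
        test_detection_models(model_index, score_threshold=0.9)
    ```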







[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692441507


   @kevinthesun Thanks, I'm trying to run it end to end on my side. How long does it take to compile Faster or Mask R-CNN from torchvision? I remember hearing that TF Faster R-CNN takes 20 min to compile. If it is too slow, it might not be a good idea to run them on CI...





[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489105197



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):
+    img = "test_street_small.jpg"
+    img_url = (
+        "https://raw.githubusercontent.com/dmlc/web-data/"
+        "master/gluoncv/detection/street_small.jpg"
+    )
+    download(img_url, img)
+
+    input_shape = (1, 3, in_size, in_size)
+    target = "llvm"
+    input_name = "input0"
+    shape_list = [(input_name, input_shape)]
+
+    scripted_model = generate_jit_model(model_index)
+    mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+    with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
+        vm_exec = relay.vm.compile(mod, target=target, params=params)
+
+    ctx = tvm.cpu()
+    vm = VirtualMachine(vm_exec, ctx)
+    data = process_image(img)
+    pt_res = scripted_model(data)
+    data = data.detach().numpy()
+    vm.set_input("main", **{input_name: data})
+    tvm_res = vm.run()
+
+    # Note: due to accumulated numerical error, we can't directly compare results
+    # with pytorch output. Some boxes might have a quite tiny difference in score
+    # and the order can become different. We just measure how many valid boxes
+    # there are for input image.
+    pt_scores = pt_res[1].detach().numpy().tolist()
+    tvm_scores = tvm_res[1].asnumpy().tolist()
+    num_pt_valid_scores = num_tvm_valid_scores = 0
+

Review comment:
    Yeah. Ideally in this case we should test against a validation dataset and calculate mAP. We have tested against the COCO dataset and the accuracy is fine.
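
    A sketch of that count-based check, given the score tensors computed in the test above (assumes the `score_threshold` argument of `test_detection_models`):
    ```
    import numpy as np

    pt_scores = pt_res[1].detach().numpy()
    tvm_scores = tvm_res[1].asnumpy()

    # Compare only the number of confident detections, since accumulated
    # numerical error can slightly perturb scores and reorder boxes.
    num_pt_valid = int(np.sum(pt_scores >= score_threshold))
    num_tvm_valid = int(np.sum(tvm_scores >= score_threshold))
    assert num_pt_valid == num_tvm_valid
    ```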







[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693096782


   > @masahi Yeah. Those ops look like they come from the scripted model. I believe for PyTorch 1.6, if we trace the model, there are 2 or 3 ops missing.
   
   To be clear, that list op is coming from **tracing** the mask rcnn model: since mask rcnn is partly scripted, even if we `torch.jit.trace` it, we still get a partly scripted TorchScript IR. See the link to the torchvision code I added above.
   
   For faster rcnn, which is **not** partly scripted and thus can be traced completely, I get the following missing ops with PyTorch 1.6:
   ```
   NotImplementedError: The following operators are not implemented: ['aten::tensor', 'aten::empty', 'aten::numel']
   
   ```





[GitHub] [incubator-tvm] masahi removed a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi removed a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692436699


   @kevinthesun I get this error from faster rcnn and mask rcnn. Is this expected?
   
   ```
     File "/mnt/2e797a66-fd2b-44fc-a3ba-24d7d65f2780/projects/dev/tvm/src/te/schedule/schedule_postproc_to_primfunc.cc", line 131
   TVMError: Check failed: allow_alloc: Cannot find the Realization point of tensor Tensor(shape=[1], op.name=box_data_length)
   During handling of the above exception, another exception occurred:
   
   TVMError: Check failed: allow_alloc: Cannot find the Realization point of tensor Tensor(shape=[1], op.name=box_data_length)
   Error during compile function
   -----------------------------
   #[version = "0.0.5"]
   fn (%p0: Tensor[(1, ?, ?), float32], Primitive=1) -> (Tensor[(1), int32], Tensor[(1, ?, ?), float32], Tensor[(1, ?), int32]) {
     vision.get_valid_counts(%p0, meta[relay.attrs.GetValidCountsAttrs][0]) /* ty=(Tensor[(1), int32], Tensor[(1, ?, ?), float32], Tensor[(1, ?), int32]) */
   }
   ```





[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690898948









[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

t-vi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487440253



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.







[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692365187


   E2E tests added. Now waiting for https://github.com/apache/incubator-tvm/pull/6464.





[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693007551


   @t-vi For now, IMO, adding an extra common API just to wrap ```try ... except``` around ```_infer_value``` doesn't bring an obvious benefit, considering the lack of generalization among the different dynamic op handlings. Also, we shouldn't limit this topic to the PyTorch frontend, since the TF frontend has a lot of the same pattern. Perhaps you could also check the TF frontend to see what kind of general logic we can have for this pattern, and discuss the potential improvement with the community. After that we can have a more complete solution. Sounds like a plan?
   
   In general I feel this topic is a larger one, about common functionality of the TVM frontend parsers. In this PR we can probably focus more on PyTorch-specific features, and have a separate discussion about how to better handle dynamic ops in the frontends.





[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865









[GitHub] [incubator-tvm] t-vi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

t-vi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692548751


   @zhiics @kevinthesun @masahi 
   Thank you, @kevinthesun for your summary and all the work in the investigation and your PR.
   
   I think using `if isinstance(..., _expr.Expr):` would be very much preferable to using exceptions.
   
   1. I see the uses of `try: ... except: ...` as using exceptions for regular control flow (the error case is the one where the old, normal logic is applicable, so we should have a clear view of when the new logic applies and when it does not; as written, it would raise on the old cases),
   2. Not using `try: ... except: ...` for regular control flow seems like good programming fundamentals to me. It would seem odd if TVM as a compiler stack did not strive to follow best practice here.
   
   I am not entirely sure whether 1. is contentious, and it seems to me that a PR is an odd place to form an opinion on 2. At the same time, I see the construct as problematic enough that I have a really hard time liking the current state of the PR. It would bring great joy if you could be convinced to move it to `if`.
   
   I should emphasize that I'm entirely for having the new functions and appreciate your work on this, @kevinthesun. Thank you!
   





[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

t-vi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487268983



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
    Still, I would prefer looking at what the type of `inputs[1]` is and having an `if`. We should at least know which types are fine to leave as-is (the current except block).
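
    A sketch of that `if`-guarded variant (assuming the module-level `_expr` and `_infer_value` names in pytorch.py; an inner try would remain only because `_infer_value` has no dedicated error type):
    ```
    data = inputs[0]
    if isinstance(inputs[1], _expr.Expr):
        # Dynamic k: attempt constant folding, else keep the Expr.
        try:
            k = _expr.const(int(_infer_value(inputs[1], {}).asnumpy()))
        except Exception:
            k = inputs[1]
    else:
        # Static python int: no inference needed.
        k = _expr.const(int(inputs[1]))
    ```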







[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r488360061



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,131 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img/255.).permute(2,0,1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index, img):
+    model_funcs = [torchvision.models.detection.fasterrcnn_resnet50_fpn,
+                   torchvision.models.detection.maskrcnn_resnet50_fpn]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = process_image(img)
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        torch._C._jit_pass_inline(script_module.graph)

Review comment:
       this can be removed







[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692285615


   > But we should still know when it is appropriate to run the inference, should we not?
   > I must admit I'm still troubled by the idea that we don't know beforehand which inputs to process with which method.
   
   We do this for some ops which have dynamic attributes. When those dynamic attributes are Relay Exprs, we need to try to infer their values to make the generated Relay program as static as possible. There is no better way to tell in advance which Relay Expr needs to be inferred (and it's not necessary, since ```_infer_value``` does a general evaluation of a Relay Expr).
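
   For illustration, `_infer_value` evaluates a Relay Expr down to a concrete value when all of its inputs are known (a sketch via the public `tvm.relay.frontend.common.infer_value`; the shapes here are arbitrary):
   ```
   import numpy as np
   import tvm
   from tvm import relay
   from tvm.relay.frontend.common import infer_value

   x = relay.var("x", shape=(1, 3, 512, 512), dtype="float32")
   expr = relay.shape_of(x)

   # Folds to the constant [1, 3, 512, 512]; raises when some input is
   # unknown, hence the try/except guards around frontend call sites.
   params = {"x": tvm.nd.array(np.zeros((1, 3, 512, 512), "float32"))}
   print(infer_value(expr, params).asnumpy())
   ```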





[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489082191



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -1141,14 +1264,14 @@ def _impl(inputs, input_types):
             bias = inputs[0]
             return _op.nn.bias_add(dense_out, bias)
         else:
-            return dense_out
+            return dense_out + _expr.const(inputs[0])

Review comment:
    @kevinthesun Given that this `def _dense()` is for converting `aten::addmm`, can you rename this converter to `_addmm` and update the variable names according to https://pytorch.org/docs/stable/generated/torch.addmm.html? In particular, please remove `use_bias`, as this name doesn't make any sense for the `addmm` op.
    
    This code comes from the original PR; we need to remove technical debt as much as possible...
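
    A sketch of what the renamed converter might look like, following `torch.addmm(input, mat1, mat2, *, beta=1, alpha=1)` semantics, i.e. `out = beta * input + alpha * (mat1 @ mat2)` (the aten::addmm input ordering assumed here matches the current `_dense` converter):
    ```
    def _addmm():
        def _impl(inputs, input_types):
            inp, mat1, mat2 = inputs[:3]
            beta, alpha = inputs[3], inputs[4]
            dtype = input_types[1]
            # nn.dense computes data @ weight^T, so pre-transpose mat2.
            out = _op.nn.dense(mat1, _op.transpose(mat2, axes=[1, 0]))
            if alpha != 1:
                out = out * _expr.const(alpha, dtype=dtype)
            if beta != 1:
                inp = inp * _expr.const(beta, dtype=dtype)
            return out + inp
        return _impl
    ```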







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487256873



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from the weird behavior of ```prim::NumToTensor```, which silently converts int32 to int64:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now the PyTorch frontend just reuses the input dtype for this op's output. For an elemwise op, the PyTorch-reported input dtypes are ["int64", "int64"], which looks fine; however, the actual input dtypes are ["int64", "int32"]. What I can do is enhance ```_pytorch_promote_types``` so that we run _infer_type on every input and use the actual input dtype, rather than relying solely on the dtype PyTorch reports. Sounds like a plan?
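   
   A rough sketch of that enhancement (the real ```_pytorch_promote_types``` does more; this only shows the dtype-inference part):
   ```
   from tvm.relay import expr as _expr
   from tvm.relay.frontend.common import infer_type as _infer_type

   def _actual_dtypes(inputs, dtypes):
       # Prefer the dtype Relay infers over the dtype PyTorch reports,
       # which can be wrong around prim::NumToTensor.
       actual = []
       for inp, dtype in zip(inputs, dtypes):
           if isinstance(inp, _expr.Expr):
               dtype = _infer_type(inp).checked_type.dtype
           actual.append(dtype)
       return actual
   ```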

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try/except block is mainly for _infer_value. Currently there is no reliable way to call _infer_value and catch only specific error types; that's why a general Exception is caught here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32, but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we convert ```prim::NumToTensor```, we can only follow the input type, which is int32 here, since there is no other information. So this is about the weird behavior of ```prim::NumToTensor``` rather than about indexing. I'm not sure how many other PyTorch ops have such behavior, but it looks like inferring the actual input type in ```_pytorch_promote_types``` would fix this kind of issue.
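   
   To see why the mismatch matters on the Relay side, a small self-contained example (not from the PR): adding an int64 operand to an int32 one fails type inference unless one side is cast first.
   ```
   import tvm
   from tvm import relay

   a = relay.const(2, "int64")  # the dtype PyTorch claims
   b = relay.const(3, "int32")  # the dtype the graph actually produced
   # relay.add(a, b) would fail type checking due to the dtype mismatch,
   # so the frontend has to cast to the promoted dtype first:
   out = relay.add(a, relay.cast(b, "int64"))
   mod = tvm.IRModule.from_expr(out)
   print(relay.transform.InferType()(mod))
   ```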

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange: check whether the input is of type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same here. These try/except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Use int64 now.
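   
   For reference, a minimal sketch of the updated cast implied by this reply (an assumed shape of the fix, not the exact diff):
   ```
   from tvm.relay import op as _op
   from tvm.relay.frontend.common import infer_type as _infer_type

   def _force_int64(begin):
       # Keep slice begin indices in int64, consistent with
       # index_size_limit = 2**63 - 1 above.
       if str(_infer_type(begin).checked_type.dtype) != "int64":
           begin = _op.cast(begin, "int64")
       return begin
   ```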

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no very secure way to try _infer_value with explicit error types. That's why a general Exception is used here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we converting ```prim::NumToTenso```, we can just follow the input type which is int32 here since there is no any other information. So this is about the weird behavior of ```prim::NumToTenso``` rather than indexing. I'm not sure how many other ops in pytorch has such behavior, but it looks like inferring actual input type in ```_pytorch_promote_types``` would fix these kind of issues.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange. It's checking whether input is type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Use int64 now.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no very secure way to try _infer_value with explicit error types. That's why a general Exception is used here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we converting ```prim::NumToTenso```, we can just follow the input type which is int32 here since there is no any other information. So this is about the weird behavior of ```prim::NumToTenso``` rather than indexing. I'm not sure how many other ops in pytorch has such behavior, but it looks like inferring actual input type in ```_pytorch_promote_types``` would fix these kind of issues.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange. It's checking whether input is type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Use int64 now.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no very secure way to try _infer_value with explicit error types. That's why a general Exception is used here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we converting ```prim::NumToTenso```, we can just follow the input type which is int32 here since there is no any other information. So this is about the weird behavior of ```prim::NumToTenso``` rather than indexing. I'm not sure how many other ops in pytorch has such behavior, but it looks like inferring actual input type in ```_pytorch_promote_types``` would fix these kind of issues.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange. It's checking whether input is type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Use int64 now.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no very secure way to try _infer_value with explicit error types. That's why a general Exception is used here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we converting ```prim::NumToTenso```, we can just follow the input type which is int32 here since there is no any other information. So this is about the weird behavior of ```prim::NumToTenso``` rather than indexing. I'm not sure how many other ops in pytorch has such behavior, but it looks like inferring actual input type in ```_pytorch_promote_types``` would fix these kind of issues.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange. It's checking whether input is type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Use int64 now.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
   Right now py frontend just follow use the same dtype for this op output. For an elemwise op, pytorch input dtype is ["int64", "int64"] which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is to enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get actual input dtype, rather than solely relying on pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no very secure way to try _infer_value with explicit error types. That's why a general Exception is used here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we converting ```prim::NumToTenso```, we can just follow the input type which is int32 here since there is no any other information. So this is about the weird behavior of ```prim::NumToTenso``` rather than indexing. I'm not sure how many other ops in pytorch has such behavior, but it looks like inferring actual input type in ```_pytorch_promote_types``` would fix these kind of issues.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange. It's checking whether input is type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try/except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Changed to use int64 now.
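   
   For reference, a sketch of the int64 variant of that "Process begin" block (illustrative, assuming a mix of python ints and scalar relay exprs; not the exact patch):
   ```python
   from tvm import relay
   
   def make_begin(begins):
       """Assemble a 1-D int64 tensor usable as a dynamic strided_slice begin."""
       parts = []
       for b in begins:
           if isinstance(b, int):
               parts.append(relay.expand_dims(relay.const(b, "int64"), axis=0))
           else:
               parts.append(relay.cast(relay.expand_dims(b, axis=0), "int64"))
       return relay.concatenate(parts, axis=0)
   ```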





----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r488360061



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,131 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img/255.).permute(2,0,1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index, img):
+    model_funcs = [torchvision.models.detection.fasterrcnn_resnet50_fpn,
+                   torchvision.models.detection.maskrcnn_resnet50_fpn]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = process_image(img)
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        torch._C._jit_pass_inline(script_module.graph)

Review comment:
       this can be removed (inlining is done inside frontend)




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692989903


   @kevinthesun this mighe be related to the error https://github.com/apache/incubator-tvm/pull/6316


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692944886


   I don't actually get why we need try: ... except: .... What case does it not handle that is then handled properly by the except part?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r486806914



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit that I'd appreciate it if there were more commentary on the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view of what kinds of types `input_types` and `inputs` can have, and to have a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment on why it doesn't. Also, why is this change particular to these elementwise ops, as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular model to work, but I have the impression that we would want to avoid ad hoc type workarounds as much as possible if we want to avoid having subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Is the int even needed here?
   Also, it might be worth trying to avoid `try: ... except Exception:` during non-error processing in favour of `if isinstance(...): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Using int32 here while index_size_limit is 2**63 - 1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid `try: ... except:`
   (there are more places, I didn't flag them all, but I think they should all be changed to use plain `if`).




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692921717


   @masahi It looks like a recent change causes this error. I'm investigating.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487271315



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64-bit indexing to 32-bit if it is considered safe.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692260607


   @masahi The problem with creating a try_infer_value API is that it doesn't simplify the code, since we need to do different handling in the except block for different ops. We still need to check the output of try_infer_value and branch to decide what action to take. In some cases we also need to do more processing in the try block. There is no uniform logic for this kind of dynamic attribute inference.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690898948


   @masahi These changes are mainly for the torchvision RCNN models and enhance existing features. There is another PR adding some backend pieces. After that I can add e2e torchvision RCNN tests to this PR, which should cover all the changes.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692285615


   > But we should still know when it is appropriate to run the inference, should we not?
   > I must admit I'm still troubled by the idea that we don't know beforehand which inputs to process with which method.
   
   We do this for some ops which have dynamic attributes. When those dynamic attributes are relay Expr, we need to try to infer their values to make the generated relay program as static as possible. There is no good way to further tell which Relay Expr needs to be inferred (and it isn't necessary, since _infer_value does general evaluation for any Relay Expr).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865


   @kevinthesun Thanks for working on this. Can you split this into multiple PRs? In particular, besides the new op conversions, you made many non-trivial changes to existing ops. Without tests for the latter changes, it is hard to tell what they are for. 
   
   We can merge the new op conversions first (as those came with tests).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692975419


   The original PT frontend just handles limited cases, mostly static shapes/attributes. It is fine to keep the input as it is for static models. For more dynamic models, we need to do some extra work to reduce the dynamism during type inference. For example, there is a chance to reduce an output shape of (?, ?, ?) to (1, ?, ?) in a dynamic op. This is necessary, otherwise it's hard to ensure we are doing the right thing for the backend. The error pointed out by @masahi is exactly such a case. The input shape of ```get_valid_counts``` should be (1, ?, 5), while somehow a recent change makes it (1, ?, ?). ```get_valid_counts``` doesn't allow a dynamic box data length. This is an example of why we need to make the output relay Expr as static as possible, and why ```_infer_value``` is necessary.
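   
   To make the requirement concrete, a small hedged check (an illustrative helper, not part of the PR) that the box tensor keeps a static last dimension before ```get_valid_counts``` is called:
   ```python
   import tvm
   from tvm.relay.frontend.common import infer_shape as _infer_shape
   
   def assert_static_box_length(boxes):
       """get_valid_counts requires a known box data length (e.g. 5)."""
       last_dim = _infer_shape(boxes)[-1]
       assert not isinstance(last_dim, tvm.tir.Any), \
           "box data length must be static, got a dynamic last dim"
   ```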


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693013272


   @kevinthesun Although having a wrapper api doesn't make the code shorter, I do believe there is value in isolating such a "toxic" programming idiom in one place (e.g. it makes updating easier). I can imagine a wrapper that takes a "success handler" and a "failure handler" (both lambda functions).
   
   That said, I agree that we can proceed with this code as it is, given that this idiom is (unfortunately) already common in other frontends and such low-level detail is not the point of this PR. I can take a stab at this issue later, since I want to keep the PyTorch frontend clean.
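   
   A sketch of what such a wrapper could look like (hypothetical API, names invented here, not merged code):
   ```python
   from tvm.relay.frontend.common import infer_value as _infer_value
   
   def try_infer_value(val, on_success, on_failure):
       """Isolate the try/except idiom in one place: attempt constant
       evaluation of val and dispatch to the matching handler."""
       try:
           ret = _infer_value(val, {}).asnumpy()
       except Exception:  # _infer_value exposes no narrower error type
           return on_failure(val)
       return on_success(ret)
   
   # Usage, e.g. for topk's k:
   #   k = try_infer_value(inputs[1],
   #                       on_success=lambda arr: _expr.const(int(arr)),
   #                       on_failure=lambda v: v)
   ```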


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693023530


   @kevinthesun The maskrcnn test worked for me, great!
   
   But unfortunately under torch 1.6, conversion fails with the following error. Most likely these ops come from pytorch functions that are scripted: https://github.com/pytorch/vision/blob/1a04d3c265679e1a508e7cd627006aaa9ef1ccfb/torchvision/models/detection/roi_heads.py#L454. Most of them don't make sense for tracing (like raising an exception).
   
   We need to come back to this problem later when we upgrade our CI.
   
   ```
   NotImplementedError: The following operators are not implemented:[
   'aten::append',
   'aten::tensor',
   'aten::dim',
   'aten::warn',
   'aten::__is__',
   'aten::__isnot__',
   'prim::RaiseException',
   'prim::unchecked_cast',
   'aten::IntImplicit',
   'aten::empty',
   'aten::numel',
   'prim::Uninitialized',
   'aten::__contains__'
   ]
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489063663



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -1141,14 +1264,14 @@ def _impl(inputs, input_types):
             bias = inputs[0]
             return _op.nn.bias_add(dense_out, bias)
         else:
-            return dense_out
+            return dense_out + _expr.const(inputs[0])

Review comment:
       Why is this needed? This is for the use_bias == False case, right?
   
   UPDATE: Oh, probably we could have a better name than `use_bias`... I wonder why bias is the first argument.
   
   ```
   use_bias = isinstance(inputs[0], _expr.Expr)
   ```
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692917238


   > @zhiics @kevinthesun @masahi
   > Thank you, @kevinthesun for your summary and all the work in the investigation and your PR.
   > 
   > I think using `if isinstance(..., _expr.Expr):` would be very much preferable to using exceptions.
   > 
   > 1. I see the uses of `try: ... except: ...` as using exceptions for regular control flow (because the error case is the one where the old normal logic is applicable and so we should have a clear view when the new logic is applicable and when not) and it would then except on the old cases,
   > 2. Not using `try: ... except: ...` for regular control flow seems like good programming fundamentals to me. It would seem odd if TVM as a compiler stack would not strive to follow best practice here.
   > 
   > Neither am I entirely sure whether 1. is contentious or not and to me it would seem that a PR is an odd place to form an opinion on 2. At the same time I see the construct as problematic enough to have a really hard time liking the current state of the PR. It would bring great joy if you could be convinced to move it to `if`.
   > 
   > I should emphasize that I'm entirely for having the new functions appreciate your @kevinthesun work on this. Thank you!
   
   @t-vi Thanks for your thoughts. To handle dynamic ops correctly, we have to use ```if isinstance(..., _expr.Expr)``` together with ```try ... except```. Do we agree that for the dynamic ops involved in this PR, ```try ... except``` is necessary? Currently there is no way other than trying _infer_value to get a constant attribute. Unfortunately this is not about good programming fundamentals but about functionality. You can see a similar methodology in the TensorFlow frontend. We need to do this because tf/pt OD models are among the most complicated models TVM has ever tried to compile. As @masahi mentioned, we can gradually move these complicated dynamic inference logics to the backend as the dyn namespace improves. However, currently they are necessary to support OD models.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489083619



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512

Review comment:
       With input size 512 it gets close to 10GB of peak RAM usage on my laptop; for CI testing it is better to make it smaller, like 300.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487260165



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no very secure way to try _infer_value with explicit error types. That's why a general Exception is used here.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692441507


   @kevinthesun Thanks, I'm trying to run it e2e on my end. I have the following questions:
   
   1. How long does it take to compile Faster or Mask R-CNN from torchvision? I remember hearing TF Faster R-CNN taking 20 min to compile. If it is too slow, it might not be a good idea to run them on CI...
   
   2. Interestingly, Mask R-CNN seems to have `prim::Loop`, even though it is traced. In general, does tracing make sense for object detection models (that might have data-dependent code paths)? The loop is coming from https://github.com/pytorch/vision/blob/1a04d3c265679e1a508e7cd627006aaa9ef1ccfb/torchvision/models/detection/roi_heads.py#L457, so this is a partly scripted model.
   
   3. I'm having this error below, is it expected?
   
   ```
   TVMError: Check failed: allow_alloc: Cannot find the Realization point of tensor Tensor(shape=[1], op.name=box_data_length)
   During handling of the above exception, another exception occurred:
   
   TVMError: Check failed: allow_alloc: Cannot find the Realization point of tensor Tensor(shape=[1], op.name=box_data_length)
   Error during compile function
   -----------------------------
   #[version = "0.0.5"]
   fn (%p0: Tensor[(1, ?, ?), float32], Primitive=1) -> (Tensor[(1), int32], Tensor[(1, ?, ?), float32], Tensor[(1, ?), int32]) {
     vision.get_valid_counts(%p0, meta[relay.attrs.GetValidCountsAttrs][0]) /* ty=(Tensor[(1), int32], Tensor[(1, ?, ?), float32], Tensor[(1, ?), int32]) */
   }
   
   ```
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693007551


   @t-vi For now IMO it doesn't bring obvious benefit of adding an extra common api to just wrap ```try ... except``` for ```_infer_value```, considering the lack of generalization among different dynamic op handlings. Also we shouldn't just limit to pytorch frontend for this topic since in tf frontend we have a lot of such patterns. Probably you can also check tf frontend to see what kind of general logic we can have for such pattern and discuss with community about the potential improvement. After that we can have a more complete solution for this. Sounds like a plan? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r486806914



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit I'd appreciate more commentary on the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view of what kinds of types `input_types` and `inputs` can have, and a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment on why it doesn't. Also, why change these particular elementwise ops as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular model to work, but I have the impression that we want to avoid ad hoc type workarounds as much as possible if we want to avoid subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Is the `int` not needed here?
   Also, it might be worth avoiding `try: ... except Exception:` for non-error control flow in favour of `if isinstance(....): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Using int32 here while index_size_limit is 2**63 - 1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid `try: .. except:`.
   (There are more places, I didn't flag them all, but I think they should all be changed to use a plain `if`.)

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and have an `if`. We should at least know which types are good to leave as is (the current except block).
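   For illustration, a sketch of the isinstance-based dispatch being suggested, assuming constants arrive as `_expr.Constant` (the thread's counterpoint is that some foldable values are not Constants, which is why `_infer_value` is probed instead):

   ```python
   # Hypothetical isinstance-based alternative: decide up front which input
   # types can be folded and which must stay dynamic, instead of probing
   # with try/except.
   if isinstance(inputs[1], _expr.Constant):
       k = _expr.const(int(inputs[1].data.asnumpy()))
   elif isinstance(inputs[1], _expr.Expr):
       k = inputs[1]  # data-dependent: keep the dynamic expression
   else:
       k = _expr.const(int(inputs[1]))
   ```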

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64-bit indexing to 32-bit if it is considered safe.
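   For illustration, the kind of guarded downcast alluded to might look like this (a hypothetical helper, not the frontend's actual code):

   ```python
   import numpy as np

   INT32_MIN = np.iinfo(np.int32).min
   INT32_MAX = np.iinfo(np.int32).max

   def maybe_downcast_index(value):
       # Keep int64 indices unless they provably fit in int32,
       # in which case the downcast is safe.
       if INT32_MIN <= value <= INT32_MAX:
           return np.int32(value)
       return np.int64(value)
   ```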

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692260607


   @masahi The problem with creating a try_infer_value API is that it doesn't simplify the code, since we need to do different handling in the except block for different ops. We would still need to check the output of try_infer_value and branch to decide what actions to take.
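   For illustration, a hypothetical ```try_infer_value``` wrapper might look like the sketch below; note the caller still has to branch on whether folding succeeded, so the per-op handling does not go away:
   
   ```python
   from tvm.relay import expr as _expr
   from tvm.relay.frontend.common import infer_value as _infer_value
   
   def try_infer_value(val, params):
       """Hypothetical helper: return (value, True) if `val` folds to a
       constant, else (val, False)."""
       try:
           return _infer_value(val, params).asnumpy(), True
       except Exception:
           return val, False
   
   # Caller-side branching remains:
   k, is_const = try_infer_value(inputs[1], {})
   if is_const:
       k = _expr.const(int(k.tolist()))
   # else: keep `k` as a dynamic relay Expr
   ```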


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690846159






----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865


   @kevinthesun Thanks for working on this. Can you split this into multiple PRs? In particular, besides the new op conversions, you made many non-trivial changes to existing ops. Without tests for the latter changes, it is hard to tell what they are for.
   
   We can merge the new op conversion first (as they came with tests).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487300464



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try except blocks are necessary to handle dynamic operators.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692285615


   > But we should still know when it is appropriate to run the inference, should we not?
   > I must admit I'm still troubled by the idea that we don't know beforehand which inputs to process with which method.
   
   We do this for some ops which have dynamic attributes. When those attributes are relay Expr, we need to try to infer their values to make the generated relay program as static as possible.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693093647


   @masahi Yeah. Those ops look like they come from the scripted part of the model. I believe for pt 1.6, if we trace the model, there are 2 or 3 ops missing.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489105197



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):
+    img = "test_street_small.jpg"
+    img_url = (
+        "https://raw.githubusercontent.com/dmlc/web-data/"
+        "master/gluoncv/detection/street_small.jpg"
+    )
+    download(img_url, img)
+
+    input_shape = (1, 3, in_size, in_size)
+    target = "llvm"
+    input_name = "input0"
+    shape_list = [(input_name, input_shape)]
+
+    scripted_model = generate_jit_model(model_index)
+    mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+    with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
+        vm_exec = relay.vm.compile(mod, target=target, params=params)
+
+    ctx = tvm.cpu()
+    vm = VirtualMachine(vm_exec, ctx)
+    data = process_image(img)
+    pt_res = scripted_model(data)
+    data = data.detach().numpy()
+    vm.set_input("main", **{input_name: data})
+    tvm_res = vm.run()
+
+    # Note: due to accumulated numerical error, we can't directly compare results
+    # with pytorch output. Some boxes might have a quite tiny difference in score
+    # and the order can become different. We just measure how many valid boxes
+    # there are for input image.
+    pt_scores = pt_res[1].detach().numpy().tolist()
+    tvm_scores = tvm_res[1].asnumpy().tolist()
+    num_pt_valid_scores = num_tvm_valid_scores = 0
+

Review comment:
       Yeah. Ideally in this case we should test against a test data set. We have tested against the COCO data set and the accuracy is fine.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489103955



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):
+    img = "test_street_small.jpg"
+    img_url = (
+        "https://raw.githubusercontent.com/dmlc/web-data/"
+        "master/gluoncv/detection/street_small.jpg"
+    )
+    download(img_url, img)
+
+    input_shape = (1, 3, in_size, in_size)
+    target = "llvm"
+    input_name = "input0"
+    shape_list = [(input_name, input_shape)]
+
+    scripted_model = generate_jit_model(model_index)
+    mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
+
+    with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
+        vm_exec = relay.vm.compile(mod, target=target, params=params)
+
+    ctx = tvm.cpu()
+    vm = VirtualMachine(vm_exec, ctx)
+    data = process_image(img)
+    pt_res = scripted_model(data)
+    data = data.detach().numpy()
+    vm.set_input("main", **{input_name: data})
+    tvm_res = vm.run()
+
+    # Note: due to accumulated numerical error, we can't directly compare results
+    # with pytorch output. Some boxes might have a quite tiny difference in score
+    # and the order can become different. We just measure how many valid boxes
+    # there are for input image.
+    pt_scores = pt_res[1].detach().numpy().tolist()
+    tvm_scores = tvm_res[1].asnumpy().tolist()
+    num_pt_valid_scores = num_tvm_valid_scores = 0
+

Review comment:
       I'm comparing the two outputs (box coordinates etc.) by eyeballing the raw numerical values, and it looks good!
   
   I hope we can have a better way to test the outputs, for example extracting valid box indices based on score, sorting the indices by score, and sorting the boxes by the sorted indices, like I did below.
   
   ```
   In [59]: boxes_pt[ind_pt]                                                                                                                                                                        
   Out[59]: 
   array([[2.04335907e+02, 1.14787331e+02, 2.59456146e+02, 2.23669510e+02],
          [1.44117985e+01, 1.24377182e+02, 6.13694534e+01, 2.14236847e+02],
          [1.74448120e+02, 1.58607117e+02, 2.78158417e+02, 2.36064560e+02],
          [1.17156494e+02, 1.18118942e+02, 1.53017059e+02, 1.92442230e+02],
          [1.00772736e+02, 1.22123978e+02, 1.23872040e+02, 1.93398422e+02],
          [1.49618347e+02, 1.32603149e+02, 2.18598679e+02, 1.74433960e+02],
          [2.13966250e-01, 1.39350525e+02, 1.12648888e+01, 1.53912018e+02],
          [1.33723541e+02, 1.24649574e+02, 1.64407623e+02, 1.61921951e+02],
          [8.67264709e+01, 1.28565033e+02, 9.51557159e+01, 1.56289093e+02]],
         dtype=float32)
   
   In [60]: boxes_tvm[ind_tvm]                                                                                                                                                                   
   Out[60]: 
   array([[204.3359    , 114.78732   , 259.45615   , 223.66951   ],
          [ 14.411795  , 124.37717   ,  61.369446  , 214.23685   ],
          [174.44815   , 158.60712   , 278.1584    , 236.06454   ],
          [117.156494  , 118.118935  , 153.01706   , 192.44223   ],
          [100.772736  , 122.12396   , 123.87204   , 193.39842   ],
          [149.61836   , 132.60315   , 218.5987    , 174.43396   ],
          [  0.39432764, 139.76776   ,  11.332638  , 153.84328   ],
          [133.72354   , 124.64958   , 164.40762   , 161.92194   ],
          [ 86.72647   , 128.56502   ,  95.155716  , 156.28911   ]],
         dtype=float32)
   ```
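   
   For reference, a sketch of that comparison (assuming `boxes_*` and `scores_*` are numpy arrays; the helper name is illustrative):
   
   ```python
   import numpy as np
   
   def sorted_valid_boxes(boxes, scores, score_threshold=0.9):
       # Keep boxes above the score threshold, ordered by descending score,
       # so the two frameworks' outputs can be compared row by row.
       valid = scores >= score_threshold
       ind = np.argsort(-scores[valid])
       return boxes[valid][ind]
   
   # np.testing.assert_allclose(sorted_valid_boxes(boxes_pt, scores_pt),
   #                            sorted_valid_boxes(boxes_tvm, scores_tvm),
   #                            rtol=1e-3, atol=1e-3)
   ```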




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489046236



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):

Review comment:
       I think pytest cannot run this test (CI uses pytest now). It should be a function without any arguments.
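   
   One way to make it discoverable by pytest is a zero-argument function or parametrization, e.g. (a sketch; the helper split is hypothetical):
   
   ```python
   import pytest
   
   @pytest.mark.parametrize("model_index", [0, 1])
   def test_detection_models(model_index):
       # run_detection_test is a hypothetical helper holding the current body
       run_detection_test(model_index, score_threshold=0.9)
   ```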




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692274208


   But we should still know when it is appropriate to run the inference, should we not?
   I must admit I'm still troubled by the idea that we don't know beforehand which inputs to process with which method.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692999233


   > But which part raises the exception? `_infer_value` or the casting that follows?
   
   It's ```_infer_value```.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693007551


   @t-vi For now IMO it doesn't bring obvious benefit of adding an extra common api to just wrap ```try ... except``` for ```_infer_value```, considering the lack of generalization among different dynamic op handlings. Also we shouldn't just limit to pytorch frontend for this topic since in tf frontend we have a lot of such patterns. Probably you can also check tf frontend to see what kind of general logic we can have for such pattern and discuss with community about the potential improvement. After that we can have a more complete solution for this. Sounds like a plan? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692998861


   @masahi I rolled back topk to still use _infer_value. Directly using dyn topk would make a certain shape dim dynamic and cause the get_valid_counts issue. Compiling pt maskrcnn takes <3 mins; I agree with you that we might just need to test maskrcnn. The fasterrcnn test is removed.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692917238


   > @zhiics @kevinthesun @masahi
   > Thank you, @kevinthesun for your summary and all the work in the investigation and your PR.
   > 
   > I think using `if isinstance(..., _expr.Expr):` would be very much preferable to using exceptions.
   > 
   > 1. I see the uses of `try: ... except: ...` as using exceptions for regular control flow (because the error case is the one where the old normal logic is applicable and so we should have a clear view when the new logic is applicable and when not) and it would then except on the old cases,
   > 2. Not using `try: ... except: ...` for regular control flow seems like good programming fundamentals to me. It would seem odd if TVM as a compiler stack would not strive to follow best practice here.
   > 
   > Neither am I entirely sure whether 1. is contentious or not and to me it would seem that a PR is an odd place to form an opinion on 2. At the same time I see the construct as problematic enough to have a really hard time liking the current state of the PR. It would bring great joy if you could be convinced to move it to `if`.
   > 
   > I should emphasize that I'm entirely for having the new functions and appreciate your work on this, @kevinthesun. Thank you!
   
   @t-vi Thanks for your thoughts. To handle dynamic ops correctly, we have to use ```if isinstance(..., _expr.Expr)``` together with ```try ... except```. Do we have an agreement that ```try ... except``` is necessary in the dynamic ops involved in this PR? Currently there is no way other than trying _infer_value to get a constant attribute. Unfortunately this is not about good programming fundamentals but about functionality, since regular control flow won't do the trick. You can see a similar methodology in the TensorFlow frontend. We need to do these things because tf/pt od models are among the most complicated models TVM has ever tried to compile. As @masahi mentioned, we can gradually move this complicated dynamic inference logic to the backend as the dyn namespace improves. However, currently it is necessary to support OD models.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489063663



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -1141,14 +1264,14 @@ def _impl(inputs, input_types):
             bias = inputs[0]
             return _op.nn.bias_add(dense_out, bias)
         else:
-            return dense_out
+            return dense_out + _expr.const(inputs[0])

Review comment:
       Why is this needed? This is for the use_bias == False case, right?
   
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489087457



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):

Review comment:
       pytest doesn't run the main function of `test_forward.py` (that's why we didn't get an error on CI for the typo recently introduced in `lstm_test`). Try running `pytest -k detection`




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692373767


   @zhiics @masahi @t-vi Sure. One major thing in this PR is the handling of dynamic operators such as slice, arange and topk. These ops have dynamic attributes which affect relay type inference. The methodology here is to try to infer these values so as to make them as static as possible (similar to the tf parser).
   
   @masahi suggested we could have an API that wraps the ```try ... except``` blocks. However, the issue with this method is that the logic inside ```try ... except``` can differ quite a bit from op to op, which makes it hard to come up with a uniform interface.
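   
   To make that concrete, here is (simplified from the actual converters in this PR) what two ops do with the inferred value; the success and fallback logic differ in both type and structure:
   ```
   # topk: a successful fold yields a scalar constant
   try:
       k = _expr.const(int(_infer_value(inputs[1], {}).asnumpy()))
   except Exception:
       k = inputs[1]

   # interpolate: a successful fold yields a Python list of ints,
   # while the fallback builds a concatenated 1-D tensor instead
   try:
       infer_res = [_infer_value(size, {}) for size in inputs[1]]
       out_size = [int(res.asnumpy()) for res in infer_res]
   except Exception:
       h = _op.expand_dims(inputs[1][0], axis=0)
       w = _op.expand_dims(inputs[1][1], axis=0)
       out_size = _op.concatenate([h, w], axis=0)
   ```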
   
   @t-vi suggested we could check the input type to decide whether such a ```try ... except``` infer_value is needed. Currently we differentiate between relay Exprs and numerical values. For these dynamic ops, we try to infer the value when a dynamic attribute is a relay Expr, to see whether it can become a numerical value. For all subtypes of relay Expr, we just call the uniform interface _infer_value to handle them.
   
   Any comments or suggestions? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692383177


   Would moving to a fully dynamic frontend, as is being done for onnx in https://github.com/apache/incubator-tvm/pull/6351, help remove the infer_value usage?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692994588


   But which part raises the exception? ```_infer_value``` or the casting that follows?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489066784



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -2043,6 +2201,151 @@ def _impl(inputs, input_types):
     return _impl
 
 
+def _roi_align(prelude):
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        boxes = inputs[1]
+
+        output_size = (inputs[3], inputs[4])
+        spatial_scale = inputs[2]
+        sample_ratio = inputs[5]
+        aligned = False if len(inputs) < 7 else inputs[6]
+
+        if aligned:
+            boxes -= _expr.const(0.5 / spatial_scale)
+
+        return _op.vision.roi_align(data, boxes, output_size, spatial_scale, sample_ratio)
+
+    return _impl
+
+
+def _unbind():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        dim = int(inputs[1])
+        ishapes = _infer_shape(data)
+        if dim >= len(ishapes):
+            msg = "Please check input dim, it shouldn't" "be greater than or equal to rank."
+            raise AttributeError(msg)
+
+        selections = ishapes[dim]
+        res_split = _op.split(data, selections, dim)
+        # squeeze each split piece to get same shape as aten::unbind
+        # TODO (yongwww): add new op to avoid the squeeze overhead
+        ret = []
+        for i in range(selections):
+            ret.append(_op.transform.squeeze(res_split[i], axis=[dim]))
+        ret = _expr.TupleWrapper(_expr.Tuple(ret), selections)
+        return ret
+
+    return _impl
+
+
+def _shape_as_tensor(prelude):
+    def _impl(inputs, input_types):
+        is_symbolic_shape = False
+        input_shape = _infer_shape(inputs[0], prelude.mod)
+        for axis in input_shape:
+            if not isinstance(axis, (int, tvm.tir.IntImm)):
+                is_symbolic_shape = True
+                break
+
+        if is_symbolic_shape:
+            ret = _op.shape_of(inputs[0], dtype="int64")
+        else:
+            ret = _expr.const(np.array(input_shape), dtype="int64")
+
+        return ret
+
+    return _impl
+
+
+def _logical_and():
+    def _impl(inputs, input_types):
+        lhs = _op.cast(inputs[0], "bool")
+        rhs = _op.cast(inputs[1], "bool")
+
+        return _op.logical_and(lhs, rhs)
+
+    return _impl
+
+
+def _nonzero(is_numpy_style):
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        ret = _op.transform.argwhere(data)
+
+        if is_numpy_style or (len(inputs) > 1 and inputs[1]):
+            # TODO(kevinthesun): Support this by adding unbind op
+            # ret = _unbind()([ret, 0], None)
+            raise RuntimeError("as_tuple is not supported yet for nonzero.")
+        return ret
+
+    return _impl
+
+
+def _scatter():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = int(inputs[1])
+        index = inputs[2]
+        src = inputs[3]
+        return _op.transform.scatter(data, index, src, axis)
+
+    return _impl
+
+
+def _scalar_tensor():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        cast_map = {
+            6: "float32",
+            7: "float64",
+            3: "int32",
+            4: "int64",
+        }
+        type_key = inputs[1]
+        if isinstance(data, _expr.Constant):
+            data = data.data.asnumpy().tolist()
+        return _expr.const(data, cast_map[type_key])
+
+    return _impl
+
+
+def _interpolate():
+    def _impl(inputs, input_types):
+        if isinstance(inputs[1], _expr.Expr):
+            out_size = inputs[1]
+        elif isinstance(inputs[1], list):
+            try:
+                infer_res = [_infer_value(size, {}) for size in inputs[1]]
+                out_size = [np.asscalar(res.asnumpy().astype(np.int)) for res in infer_res]
+            except Exception:
+                h = _op.expand_dims(inputs[1][0], axis=0)
+                w = _op.expand_dims(inputs[1][1], axis=0)
+                out_size = _op.concatenate([h, w], axis=0)
+
+        data = inputs[0]
+        align_corners = inputs[4]
+        method = inputs[3]
+        if method.startswith("nearest"):
+            method = "nearest_neighbor"
+
+        if method == "nearest_neighbor":
+            coord_trans = "asymmetric"
+        elif align_corners:
+            coord_trans = "align_corners"
+        else:
+            coord_trans = "half_pixel"
+
+        def func(x):

Review comment:
       remove this `func`




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
zhiics commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489041322



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -2961,6 +3293,7 @@ def from_pytorch(script_module, input_shapes, custom_convert_map=None, default_d
 
     graph = script_module.graph.copy()
     _run_jit_passes(graph)
+    print(graph)

Review comment:
       remove this line




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693002214


   But then wouldn't a pattern of
   ```
   inferred_val = _try_infer_value(inp)
   if inferred_val is not None:
      res = foo(inferred_val)
   else:
      res = bar(inp)
   ```
   work?
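   
   A minimal sketch of what that helper could look like, assuming any failure inside `_infer_value` just means the value is not statically known (`_try_infer_value` is a made-up name):
   ```
   def _try_infer_value(inp, params=None):
       """Return the value of inp as a numpy array if it can be
       evaluated at compile time, otherwise None."""
       try:
           return _infer_value(inp, params or {}).asnumpy()
       except Exception:
           return None
   ```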


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692436699


   @kevinthesun I get this error from faster rcnn and mask rcnn. Is this expected?
   
   ```
     File "/mnt/2e797a66-fd2b-44fc-a3ba-24d7d65f2780/projects/dev/tvm/src/te/schedule/schedule_postproc_to_primfunc.cc", line 131
   TVMError: Check failed: allow_alloc: Cannot find the Realization point of tensor Tensor(shape=[1], op.name=box_data_length)
   During handling of the above exception, another exception occurred:
   
   TVMError: Check failed: allow_alloc: Cannot find the Realization point of tensor Tensor(shape=[1], op.name=box_data_length)
   Error during compile function
   -----------------------------
   #[version = "0.0.5"]
   fn (%p0: Tensor[(1, ?, ?), float32], Primitive=1) -> (Tensor[(1), int32], Tensor[(1, ?, ?), float32], Tensor[(1, ?), int32]) {
     vision.get_valid_counts(%p0, meta[relay.attrs.GetValidCountsAttrs][0]) /* ty=(Tensor[(1), int32], Tensor[(1, ?, ?), float32], Tensor[(1, ?), int32]) */
   }
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487296417



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we convert ```prim::NumToTensor```, we can just follow the input type, which is int32 here, since there is no other information. So this is about the weird behavior of ```prim::NumToTensor``` rather than indexing. I'm not sure how many other ops in pytorch have such behavior, but it looks like inferring the actual input type in ```_pytorch_promote_types``` would fix these kinds of issues.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692392769


   > Is moving to fully dynamic frontend like being done for onnx #6351 help removing infer_value usage?
   
   While we try to make the relay expression as static as possible, a lot of work can still only be done in the frontend. For these cases we still need to use _infer_value. This happens a lot for tf/pt od models. For some simple ops such as topk, we can directly use the dyn namespace op and eliminate _infer_value.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-691321674


   @masahi Coming soon. :D


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692285615


   > But we should still know when it is appropriate to run the inference, should we not?
   > I must admit I'm still troubled by the idea that we don't know beforehand which inputs to process with which method.
   
   We do this for some ops which have dynamic attributes. When those dynamic attributes are relay Exprs, we need to try to infer their values to make the generated relay program as static as possible. There is no good way to further tell which relay Expr needs to be inferred (and it's not necessary, since _infer_value does a general evaluation of relay Exprs).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690898948






----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692981543


   I think I'm slowly starting to understand. But couldn't one have something like `e_new = _try_to_make_static(e)` that takes an expression, makes it as static as possible and returns the result, which might be `e` if it cannot be made static? Ideally this would work without going through exceptions, but if we cannot have that, one could implement it with `try: ... except: ...`. The thing I'm uncomfortable with is having so many places where we do `try: ... except:`. (Sounds like @masahi suggested this above.)


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r488366313



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,131 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img/255.).permute(2,0,1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index, img):
+    model_funcs = [torchvision.models.detection.fasterrcnn_resnet50_fpn,
+                   torchvision.models.detection.maskrcnn_resnet50_fpn]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = process_image(img)
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        torch._C._jit_pass_inline(script_module.graph)
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):
+    img = "test_street_small.jpg"
+    img_url = "https://raw.githubusercontent.com/dmlc/web-data/" \
+              "master/gluoncv/detection/street_small.jpg"
+    download(img_url, img)
+
+    input_shape = (1, 3, in_size, in_size)
+    target = "llvm"
+    input_name = 'input0'
+    shape_list = [(input_name, input_shape)]
+
+    scripted_model = generate_jit_model(model_index, img)
+    mod, params = relay.frontend.from_pytorch(scripted_model,
+                                              shape_list)
+
+    with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
+        vm_exec = relay.vm.compile(mod, target=target, params=params)
+
+    ctx = tvm.cpu()
+    vm = VirtualMachine(vm_exec, ctx)
+    data = process_image(img)

Review comment:
       I think it is better to use an input img different from the one used in tracing, to make sure the compiled model is not tied to a specific input. Maybe we can just use a random image for tracing.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865


   @kevinthesun Thanks for working on this. Can you split this into multiple PRs? In particular, besides the new op conversions, you made many non-trivial changes to existing ops. Without tests for the latter changes, it is hard to tell what they are for.
   
   We can merge the new op conversions first (as they came with tests).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690846159


   cc @siju-samuel @t-vi 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692285615


   > But we should still know when it is appropriate to run the inference, should we not?
   > I must admit I'm still troubled by the idea that we don't know beforehand which inputs to process with which method.
   
   We do this for some ops which have dynamic attributes. When those dynamic attributes are relay Exprs, we need to try to infer their values to make the generated relay program as static as possible. There is no good way to further tell which relay Expr needs to be inferred (and it's not necessary, since _infer_value does a general evaluation of relay Exprs).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692975419


   The original pt frontend just handles limited cases, mostly static shapes/attributes, where it is fine to keep the input as it is. For more dynamic models, we need to do some extra work to reduce the dynamism during type inference. For example, there is a chance to reduce an output shape of (?, ?, ?) to (1, ?, ?) in a dynamic op. This is necessary, otherwise it's hard to ensure we are doing the right thing for the backend. The error pointed out by @masahi is exactly this case. The input shape of ```get_valid_counts``` should be (1, ?, 5), while somehow a recent change makes it (1, ?, ?). ```get_valid_counts``` doesn't allow a dynamic box data length. This is an example of why we need to make the output relay Expr as static as possible, and ```_infer_shape``` is necessary.
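   
   To illustrate with the slice handling in this PR, the idea is to keep every statically known dim and only fall back to a fully dynamic shape when a dim is unknown (a sketch using the helpers already in pytorch.py; `_end_from_shape` is a made-up name):
   ```
   def _end_from_shape(data):
       # Prefer statically inferred dims; a single Any forces the dynamic path.
       dshape = _infer_shape(data)
       if any(isinstance(dim, tvm.tir.Any) for dim in dshape):
           return _op.shape_of(data)  # fully dynamic fallback
       return [int(dim) for dim in dshape]  # keep everything static
   ```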


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r489086661



##########
File path: tests/python/frontend/pytorch/test_object_detection.py
##########
@@ -0,0 +1,136 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Test torch vision fasterrcnn and maskrcnn models"""
+import numpy as np
+import torch
+import torchvision
+import cv2
+
+import tvm
+
+from tvm import relay
+from tvm.runtime.vm import VirtualMachine
+from tvm.contrib.download import download
+
+
+in_size = 512
+
+
+def process_image(img):
+    img = cv2.imread(img).astype("float32")
+    img = cv2.resize(img, (in_size, in_size))
+    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+    img = torch.from_numpy(img / 255.0).permute(2, 0, 1).float()
+    img = torch.unsqueeze(img, axis=0)
+
+    return img
+
+
+def do_trace(model, inp, in_size=in_size):
+    model_trace = torch.jit.trace(model, inp)
+    model_trace.eval()
+    return model_trace
+
+
+def dict_to_tuple(out_dict):
+    if "masks" in out_dict.keys():
+        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
+    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
+
+
+class TraceWrapper(torch.nn.Module):
+    def __init__(self, model):
+        super().__init__()
+        self.model = model
+
+    def forward(self, inp):
+        out = self.model(inp)
+        return dict_to_tuple(out[0])
+
+
+def generate_jit_model(index):
+    model_funcs = [
+        torchvision.models.detection.fasterrcnn_resnet50_fpn,
+        torchvision.models.detection.maskrcnn_resnet50_fpn,
+    ]
+
+    model_func = model_funcs[index]
+    model = TraceWrapper(model_func(pretrained=True))
+
+    model.eval()
+    inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
+
+    with torch.no_grad():
+        out = model(inp)
+
+        script_module = do_trace(model, inp)
+        script_out = script_module(inp)
+
+        assert len(out[0]) > 0 and len(script_out[0]) > 0
+        return script_module
+
+
+def test_detection_models(model_index, score_threshold=0.9):

Review comment:
       This is called in test_forward's main. Looks like pytorch's test_forward.py doesn't use pytest yet?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692441507


   @kevinthesun Thanks, I'm trying to run it e2e on my end. How long does it take to compile faster or mask rcnn? I remember hearing that TF faster rcnn takes 20 min to compile. If it is too slow, it might not be a good idea to run them on CI...


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-691533504


   If `try ... except` is necessary, I recommend adding a wrapper function around `_infer_value` and isolating the `try ... except` logic there. I see many repeated `try ... except` blocks.
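   
   For example, something along these lines (a hypothetical helper, not code in this PR; `on_success` receives the folded numpy value):
   ```
   def try_infer_value(val, on_success, on_failure=lambda v: v):
       # Isolate the try/except around _infer_value in one place.
       try:
           return on_success(_infer_value(val, {}).asnumpy())
       except Exception:
           return on_failure(val)

   # e.g. in _topk:
   # k = try_infer_value(inputs[1], lambda v: _expr.const(int(v)))
   ```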


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487256873



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
    Right now the py frontend just follows the input dtype for this op's output. For an elemwise op, the pytorch input dtype is ["int64", "int64"], which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get the actual input dtype, rather than relying solely on the pytorch input dtype. Sounds like a plan?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       This comes from weird behavior of ```prim::NumToTensor```. It converts int32 to int64 silently:
   ```
   %11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
     %im_h : Long() = prim::NumToTensor(%11), scope: __module.model
   ```
    Right now the py frontend just follows the input dtype for this op's output. For an elemwise op, the pytorch input dtype is ["int64", "int64"], which is fine. However, the actual input dtype is ["int64", "int32"]. What I can do is enhance ```_pytorch_promote_types``` so that we do _infer_type for every input and get the actual input dtype, rather than relying solely on the pytorch input dtype. Sounds like a plan?
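    
    A sketch of that enhancement (hypothetical helper name; the real ```_pytorch_promote_types``` would keep its existing promotion rules and just consume these dtypes):
    ```
    def _infer_actual_dtypes(inputs, dtypes):
        # Trust relay type inference over the dtype reported by the Torch
        # graph, since ops like prim::NumToTensor can change it silently.
        actual = []
        for inp, dtype in zip(inputs, dtypes):
            if isinstance(inp, _expr.Expr):
                actual.append(_infer_type(inp).checked_type.dtype)
            else:
                actual.append(dtype)
        return actual
    ```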

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The try except block is mainly for _infer_value. Currently there is no reliable way to call _infer_value with explicit error types. That's why a general Exception is caught here.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32 but ```%im_h : Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without any hint. When we convert ```prim::NumToTensor```, we can just follow the input type, which is int32 here, since there is no other information. So this is about the weird behavior of ```prim::NumToTensor``` rather than indexing. I'm not sure how many other ops in pytorch have such behavior, but it looks like inferring the actual input type in ```_pytorch_promote_types``` would fix these kinds of issues.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Sure. I can do what I did for arange: check whether the input is of type _expr.Expr.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       ```if isinstance(inputs[3], _expr.Expr):```

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Same. These try except blocks are necessary to handle dynamic operators.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Using int64 now.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-690855865






----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi merged pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] zhiics commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
zhiics commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692367505


   Thanks for the efforts and discussions. @kevinthesun could you please summarize the solutions/decisions to align with @masahi and @t-vi? 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693016775


   @masahi Yeah. It's probably worth a forum discussion thread before we proceed with any solution. In the tf frontend, there are even more ```try .. except``` blocks which don't involve ```_infer_value``` (thanks to the complexity of the tf frontend :D). I think it would be helpful to discuss overall what kind of style is recommended (or necessary) in tvm frontends. After that, a major refactor can happen for at least the tf and pt frontends.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] masahi commented on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-693787449


   Thanks @kevinthesun for the great work!!
   Thanks everyone for review.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] kevinthesun edited a comment on pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
kevinthesun edited a comment on pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#issuecomment-692392769


   > Is moving to fully dynamic frontend like being done for onnx #6351 help removing infer_value usage?
   
   While we try to make the relay expression as static as possible, a lot of work can still only be done in the frontend. For these cases we still need to use _infer_value. This happens a lot for tf/pt od models. For some simple ops such as topk, we can directly use the dyn namespace op and eliminate _infer_value. Later, as we gradually improve dynamic ops, it will be possible to eliminate more _infer_values.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] t-vi commented on a change in pull request #6449: [Frontend][Pytorch] Improve Pytorch frontend for object detection models

Posted by GitBox <gi...@apache.org>.
t-vi commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r486806914



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit that I'd appreciate it if there were more commentary on the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view of what kinds of types `input_types` and `inputs` can have, and to have a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment on why it doesn't. Also, why patch this particular elementwise conversion as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular model to work, but I have the impression that we want to avoid ad hoc type workarounds as much as possible if we want to avoid subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The `int` is not needed here?
   Also, it might be worth avoiding `try: ... except Exception:` during non-error processing in favour of `if isinstance(...): ... else: ...`.
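   
   For instance (a sketch; note this folds less than ```_infer_value```, which can also evaluate non-trivial but data-independent subgraphs):
    ```
    if isinstance(inputs[1], _expr.Constant):
        # already a compile-time value, fold it eagerly
        k = _expr.const(int(inputs[1].data.asnumpy()))
    elif isinstance(inputs[1], _expr.Expr):
        # genuinely dynamic, leave it to the op
        k = inputs[1]
    else:
        k = _expr.const(int(inputs[1]))
    ```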

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       Having int32 here while index_size_limit is 2**63 - 1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid `try: ... except:`.
   (There are more places, I didn't flag them all, but I think they should all be changed to use a plain `if`.)

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and having an `if`. We should at least know which types are fine to leave as is (the current except block).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64-bit indexing to 32-bit when that is considered safe.
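   As a rough, standalone illustration of that "downcast when safe" idea (not the frontend's actual logic; `index_const` is a made-up helper): only narrow an index constant when its value provably fits in int32.

```python
# Illustrative only: narrow 64-bit index constants to int32 when the value
# provably fits; otherwise keep int64, PyTorch's default integral dtype.
from tvm import relay

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def index_const(value):
    if INT32_MIN <= int(value) <= INT32_MAX:
        return relay.const(int(value), "int32")
    return relay.const(int(value), "int64")
```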

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit I'd appreciate more commentary on the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a clear view of what kinds of types `input_types` and `inputs` can hold, and a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment on why it doesn't. Also, why is the change specific to these elementwise ops, as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular model to work, but I have the impression that we want to avoid ad hoc type workarounds as much as possible if we want to avoid subtle bugs whenever someone uses something outside what our unit tests catch.
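   One data point for why a single, PyTorch-faithful promotion helper matters: PyTorch's implicit promotion does not match NumPy's, so borrowing NumPy's table (the easy shortcut) would silently change dtypes:

```python
# PyTorch and NumPy promote mixed int64/float32 differently, so a central
# promotion helper has to encode PyTorch's rules rather than NumPy's.
import numpy as np
import torch

print(torch.promote_types(torch.int64, torch.float32))  # torch.float32
print(np.promote_types(np.int64, np.float32))           # float64
```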
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The `int()` conversion is not needed here?
   Also, it might be worth avoiding `try: ... except Exception:` for non-error control flow in favour of `if isinstance(...): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?
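   To make the question concrete, one possible classification, written as a sketch; the enumeration is an assumption that would need checking against the converters that feed `_slice`, and `classify_slice_bound` is a hypothetical helper:

```python
# Hypothetical classification of a slice bound; which branch each real
# input takes is an assumption, not verified against all converters.
from tvm.relay import analysis, expr as _expr

def classify_slice_bound(v):
    if isinstance(v, int) or (isinstance(v, str) and v.isdigit()):
        return "static"    # use int(v) directly
    if isinstance(v, _expr.Constant):
        return "static"    # fold via v.data.asnumpy()
    if isinstance(v, _expr.Expr) and not analysis.free_vars(v):
        return "foldable"  # _infer_value should succeed
    return "dynamic"       # must go straight through as an Expr
```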

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       int32 here and index_size limit 2**63-1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid: `try: .. except:`
   (there are more places, I didn't flag them all, but I think they should be all changed to use plain `if`).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and have an `if`. We should at least know which types are good to leave as is (the current except block).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64 bit indexing to 32 based if it is considered safe.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit that I'd appreciate if there were more commentary to the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view what kind of types `input_types` and `inputs` can have and have a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment why it isn't. Also why is this particular apparently particular elementwise ops as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular to work, but I have the impression that we would want to avoid ad hoc type workarounds as much as possible if we want to avoid having subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The int is not needed here?
   Also it might be worth trying to avoid `try: ... except Exception:` during non-error-processing in favour of `if isinstance(....): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       int32 here and index_size limit 2**63-1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid: `try: .. except:`
   (there are more places, I didn't flag them all, but I think they should be all changed to use plain `if`).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and have an `if`. We should at least know which types are good to leave as is (the current except block).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64 bit indexing to 32 based if it is considered safe.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit that I'd appreciate if there were more commentary to the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view what kind of types `input_types` and `inputs` can have and have a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment why it isn't. Also why is this particular apparently particular elementwise ops as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular to work, but I have the impression that we would want to avoid ad hoc type workarounds as much as possible if we want to avoid having subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The int is not needed here?
   Also it might be worth trying to avoid `try: ... except Exception:` during non-error-processing in favour of `if isinstance(....): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       int32 here and index_size limit 2**63-1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid: `try: .. except:`
   (there are more places, I didn't flag them all, but I think they should be all changed to use plain `if`).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and have an `if`. We should at least know which types are good to leave as is (the current except block).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64 bit indexing to 32 based if it is considered safe.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit that I'd appreciate if there were more commentary to the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view what kind of types `input_types` and `inputs` can have and have a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment why it isn't. Also why is this particular apparently particular elementwise ops as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular to work, but I have the impression that we would want to avoid ad hoc type workarounds as much as possible if we want to avoid having subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The int is not needed here?
   Also it might be worth trying to avoid `try: ... except Exception:` during non-error-processing in favour of `if isinstance(....): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       int32 here and index_size limit 2**63-1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid: `try: .. except:`
   (there are more places, I didn't flag them all, but I think they should be all changed to use plain `if`).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and have an `if`. We should at least know which types are good to leave as is (the current except block).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64 bit indexing to 32 based if it is considered safe.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit that I'd appreciate if there were more commentary to the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have a view what kind of types `input_types` and `inputs` can have and have a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment why it isn't. Also why is this particular apparently particular elementwise ops as opposed to amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular to work, but I have the impression that we would want to avoid ad hoc type workarounds as much as possible if we want to avoid having subtle bugs whenever someone uses something outside what our unit tests catch.
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       The int is not needed here?
   Also it might be worth trying to avoid `try: ... except Exception:` during non-error-processing in favour of `if isinstance(....): ... else:`.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       For which types do we want to do this (or alternatively which can go straight through)?

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")

Review comment:
       int32 here and index_size limit 2**63-1 feels strange.

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -429,25 +507,56 @@ def _impl(inputs, input_types):
 
     return _impl
 
+def _full_impl(data, fill_value, dtype):
+    size = []
+    need_reshape = False
+    new_shape = []
+    for dim in data:
+        if isinstance(dim, _expr.Expr):
+            if isinstance(dim, _expr.Constant):
+                dim = int(dim.data.asnumpy())
+                if isinstance(size, list):
+                    size.append(dim)
+                new_shape.append(dim)
+            else:
+                try:
+                    dim = int(_infer_value(dim, {}).asnumpy())

Review comment:
       Here, too, maybe avoid `try: ... except:`
   (there are more places, I didn't flag them all, but I think they should all be changed to use a plain `if`).
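   For this spot, one way to get a plain `if` — sketched under the assumption that `_infer_value` only fails here when the expression has free variables (worth verifying):

   ```python
   from tvm.relay import analysis

   # Hypothetical predicate: treat an expression as statically evaluable
   # iff it has no free variables.
   def _is_static(expr):
       return len(analysis.free_vars(expr)) == 0

   new_shape = []
   for dim in data:
       if isinstance(dim, int):
           new_shape.append(dim)
       elif isinstance(dim, _expr.Constant):
           new_shape.append(int(dim.data.asnumpy()))
       elif _is_static(dim):
           new_shape.append(int(_infer_value(dim, {}).asnumpy()))
       else:
           new_shape.append(dim)  # genuinely dynamic dimension
   ```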

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I must admit I'd appreciate more commentary on the typing changes here.
   - In my opinion (and I could be wrong), it would be helpful to have an overview of what kinds of types `input_types` and `inputs` can take, and a single place where we do implicit type promotion. I had hoped `_pytorch_promote_types` could be that place.
   - If `_pytorch_promote_types` doesn't do the job, maybe we can comment on why it doesn't. Also, why patch these particular elementwise ops instead of amending `_pytorch_promote_types`?
   
   I know this looks like I'm asking for busywork when you're mostly interested in getting a particular model to work, but I have the impression we should avoid ad hoc type workarounds as much as possible if we want to avoid subtle bugs whenever someone uses something outside what our unit tests catch.
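   To make that concrete, this is roughly the shape I'd hope for — one promotion point that both infers actual dtypes and casts (a sketch; `_result_dtype` stands in for PyTorch's real promotion table and is hypothetical):

   ```python
   # Hypothetical centralized promotion, not the PR's code.
   def _promote_two(inputs, input_types):
       dtypes = []
       for inp, recorded in zip(inputs[:2], input_types[:2]):
           # Prefer the inferred dtype over the recorded one when available.
           if isinstance(inp, _expr.Expr):
               dtypes.append(_infer_type(inp).checked_type.dtype)
           else:
               dtypes.append(recorded)
       dtype = _result_dtype(dtypes)  # PyTorch-style promotion rule, assumed
       return [
           _op.cast(i, dtype) if isinstance(i, _expr.Expr) else _expr.const(i, dtype)
           for i in inputs[:2]
       ]
   ```

   Then `_elemwise` stays two lines and every op gets the same behaviour.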
   

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Is the `int` conversion not needed here anymore?
   Also it might be worth trying to avoid `try: ... except Exception:` during non-error processing in favour of `if isinstance(...): ... else: ...`.
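   i.e. roughly this, assuming the only shapes of `k` we see are Python ints, `_expr.Constant`, and genuinely dynamic expressions (a sketch, to be checked against what the parser actually produces):

   ```python
   # Hypothetical if-based version of the k handling.
   if isinstance(inputs[1], int):
       k = _expr.const(inputs[1])
   elif isinstance(inputs[1], _expr.Constant):
       k = _expr.const(int(inputs[1].data.asnumpy()))
   else:
       # dynamic k, e.g. computed from a tensor shape at runtime
       k = inputs[1]
   ```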

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -364,7 +438,11 @@ def _impl(inputs, input_types):
 def _topk():
     def _impl(inputs, input_types):
         data = inputs[0]
-        k = int(inputs[1])
+        try:
+            k = int(_infer_value(inputs[1], {}).asnumpy().tolist())
+            k = _expr.const(k)
+        except Exception:
+            k = inputs[1]

Review comment:
       Still, I would prefer looking at what the type of `inputs[1]` is and using an `if`. We should at least know which types are fine to leave as-is (the current `except` block).

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       I think we would eventually want to look at using type propagation more.
   However, the issue here is that PyTorch's default dtype for integral tensors is int64. I don't think we should be hacking around that, really, because we're bound to end up with cases where int64 is the right thing to have. If I understood the discussions on the forum correctly, the idea was to downcast 64-bit indexing to 32-bit where it is considered safe.
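   The safety check itself could be quite small — e.g. (a sketch; assumes index safety can be decided from the static shape of the sliced tensor):

   ```python
   INT32_MAX = 2**31 - 1

   # Hypothetical helper: int64 indices into `data` can be downcast to
   # int32 when every dimension is static and fits into int32.
   def _can_index_with_int32(data):
       shape = _infer_shape(data)
       return all(
           not isinstance(dim, tvm.tir.Any) and int(dim) <= INT32_MAX
           for dim in shape
       )
   ```

   With something like that, begin/end could default to int64 and only be narrowed when the check holds.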

##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -274,38 +295,91 @@ def _impl(inputs, input_types):
 
 def _slice():
     def _impl(inputs, input_types):
+        index_size_limit = 2**63 - 1
         data = inputs[0]
-        strides = []
+        dshape = _infer_shape(data)
+        ndim = len(dshape)
+        end = []
+        for dim in dshape:
+            if isinstance(dim, tvm.tir.Any):
+                end = _op.shape_of(data)
+                break
+            end.append(int(dim))
 
-        if isinstance(data, _expr.Expr):
-            inferred_shape = _infer_shape(data)
-            end = []
-            for infer in inferred_shape:
-                end.append(int(infer))
-            if isinstance(data, _expr.Var):
-                end = inferred_shape
-                end = list(end)
-        else:
-            end = data.shape
-
-        begin = [0] * len(end)
+        begin = [0] * ndim
         dim = int(inputs[1])
+        stride = int(inputs[4])
         if isinstance(inputs[2], _expr.Call):
-            begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            try:
+                begin[dim] = np.asscalar(_infer_value(inputs[2], {}).asnumpy().astype(np.int))
+            except Exception:
+                begin[dim] = inputs[2]
         else:
             begin[dim] = int(inputs[2])
 
+        # Process begin
+        if not isinstance(begin[dim], int):
+            tmp = []
+            for b in begin:
+                if isinstance(b, int):
+                    tmp.append(_op.expand_dims(_expr.const(b, "int64"), axis=0))
+                else:
+                    tmp.append(_op.cast(_op.expand_dims(b, axis=0), "int64"))
+            begin = _op.concatenate(tmp, axis=0)
+            btype = _infer_type(begin).checked_type.dtype
+            if str(btype) != "int32":
+                begin = _op.cast(begin, "int32")
+
         if isinstance(inputs[3], str) and inputs[3].isdigit():
-            end[dim] = min(end[dim], int(inputs[3]))
+            target_end = int(inputs[3])
         else:
-            if isinstance(inputs[3], _expr.Call):
-                target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+            if isinstance(inputs[3], _expr.Expr):
+                try:
+                    target_end = np.asscalar(_infer_value(inputs[3], {}).asnumpy().astype(np.int))
+                except Exception:

Review comment:
       I'd have a strong preference for that, yeah.



----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org