Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/05/04 13:52:49 UTC

[GitHub] [tvm] daniperfer opened a new issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

daniperfer opened a new issue #7971:
URL: https://github.com/apache/tvm/issues/7971


   Hi:
   
   I am trying to follow the tutorial in `tutorials/frontend/deploy_object_detection_pytorch.py`, but I got the following error:
   
   ```
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torch/tensor.py:593: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
     'incorrect results).', category=RuntimeWarning)
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torch/nn/functional.py:3123: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     dtype=torch.float32)).float())) for i in range(dim)]
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/models/detection/anchor_utils.py:147: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     torch.tensor(image_size[1] // g[1], dtype=torch.int64, device=device)] for g in grid_sizes]
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/ops/boxes.py:128: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/ops/boxes.py:130: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/models/detection/transform.py:271: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     for s, s_orig in zip(new_size, original_size)
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/models/detection/roi_heads.py:372: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     return torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)
   Traceback (most recent call last):
     File "tutorials/frontend/deploy_object_detection_pytorch.py", line 95, in <module>
       script_module = do_trace(model, inp)
     File "tutorials/frontend/deploy_object_detection_pytorch.py", line 65, in do_trace
       model_trace = torch.jit.trace(model, inp)
     File "/home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torch/jit/_trace.py", line 742, in trace
       _module_class,
     File "/home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torch/jit/_trace.py", line 940, in trace_module
       _force_outplace,
   RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions
   ```
   First of all, I built and installed TVM following the steps described in the [Host setup and docker build](https://tvm.apache.org/docs/deploy/vitis_ai.html#host-setup-and-docker-build) section of the Vitis AI integration tutorial.
   
   Then I slightly modified the `deploy_object_detection_pytorch.py` script (see code below) and launched it from the PyTorch conda environment inside the Docker container:
   > `python tutorials/frontend/deploy_object_detection_pytorch.py`
   
   This produced the error: `RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions`
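   
   For reference, a minimal self-contained snippet (hypothetical, unrelated to the tutorial) reproduces the same `RuntimeError` when a traced module returns a list of dictionaries, which is the output format of the torchvision detection models:
   ```
   import torch
   
   
   class ReturnsDictInList(torch.nn.Module):
       def forward(self, x):
           # Same output structure as torchvision detection models:
           # List[Dict[str, Tensor]]
           return [{"boxes": x * 2, "scores": x.sum()}]
   
   
   # Raises: RuntimeError: Only tensors, lists, tuples of tensors, or
   # dictionary of tensors can be output from traced functions
   torch.jit.trace(ReturnsDictInList(), torch.rand(1, 3))
   ```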
   
   Here is the slightly modified `deploy_object_detection_pytorch.py` script:
   ```
   # Licensed to the Apache Software Foundation (ASF) under one
   # or more contributor license agreements.  See the NOTICE file
   # distributed with this work for additional information
   # regarding copyright ownership.  The ASF licenses this file
   # to you under the Apache License, Version 2.0 (the
   # "License"); you may not use this file except in compliance
   # with the License.  You may obtain a copy of the License at
   #
   #   http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing,
   # software distributed under the License is distributed on an
   # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
   # KIND, either express or implied.  See the License for the
   # specific language governing permissions and limitations
   # under the License.
   """
   Compile PyTorch Object Detection Models
   =======================================
   This article is an introductory tutorial to deploy PyTorch object
   detection models with Relay VM.
   
   To begin, PyTorch should be installed.
   TorchVision is also required, since we use it as our model zoo.
   
   A quick solution is to install via pip
   
   .. code-block:: bash
   
       pip install torch==1.7.0
       pip install torchvision==0.8.1
   
   or please refer to official site
   https://pytorch.org/get-started/locally/
   
   PyTorch versions should be backwards compatible but should be used
   with the proper TorchVision version.
   
   Currently, TVM supports PyTorch 1.7 and 1.4. Other versions may
   be unstable.
   """
   
   import tvm
   from tvm import relay
   from tvm.runtime.vm import VirtualMachine
   from tvm.contrib.download import download
   
   import numpy as np
   import cv2
   
   # PyTorch imports
   import torch
   import torchvision
   
   ######################################################################
   # Load pre-trained maskrcnn from torchvision and do tracing
   # ---------------------------------------------------------
   in_size = 300
   
   input_shape = (1, 3, in_size, in_size)
   
   
   def do_trace(model, inp):
       model_trace = torch.jit.trace(model, inp)
       model_trace.eval()
       return model_trace
   
   
   def dict_to_tuple(out_dict):
       if "masks" in out_dict.keys():
           return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
       return out_dict["boxes"], out_dict["scores"], out_dict["labels"]
   
   
   class TraceWrapper(torch.nn.Module):
       def __init__(self, model):
           super().__init__()
           self.model = model
   
       def forward(self, inp):
           out = self.model(inp)
           return dict_to_tuple(out[0])
   
   
   # model_func = torchvision.models.detection.maskrcnn_resnet50_fpn
   # model = TraceWrapper(model_func(pretrained=True))
   ####################################################################
   # THIS IS THE ONLY MODIFICATION I MADE TO THE ORIGINAL TUTORIAL CODE 
   model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
   
   model.eval()
   inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, in_size, in_size)))
   
   with torch.no_grad():
       out = model(inp)
       script_module = do_trace(model, inp)
   
   ######################################################################
   # Download a test image and pre-process
   # -------------------------------------
   img_path = "test_street_small.jpg"
   img_url = (
       "https://raw.githubusercontent.com/dmlc/web-data/" "master/gluoncv/detection/street_small.jpg"
   )
   download(img_url, img_path)
   
   img = cv2.imread(img_path).astype("float32")
   img = cv2.resize(img, (in_size, in_size))
   img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
   img = np.transpose(img / 255.0, [2, 0, 1])
   img = np.expand_dims(img, axis=0)
   
   ######################################################################
   # Import the graph to Relay
   # -------------------------
   input_name = "input0"
   shape_list = [(input_name, input_shape)]
   mod, params = relay.frontend.from_pytorch(script_module, shape_list)
   
   ######################################################################
   # Compile with Relay VM
   # ---------------------
   # Note: Currently only the CPU target is supported. For x86 targets, it is
   # highly recommended to build TVM with Intel MKL and Intel OpenMP to get
   # the best performance, due to the large dense operators in
   # torchvision rcnn models.
   
   # Add "-libs=mkl" to get the best performance on x86 targets.
   # For an x86 machine that supports AVX-512, the complete target is
   # "llvm -mcpu=skylake-avx512 -libs=mkl"
   target = "llvm"
   
   with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
       vm_exec = relay.vm.compile(mod, target=target, params=params)
   
   ######################################################################
   # Inference with Relay VM
   # -----------------------
   dev = tvm.cpu()
   vm = VirtualMachine(vm_exec, dev)
   vm.set_input("main", **{input_name: img})
   tvm_res = vm.run()
   
   ######################################################################
   # Get boxes with score larger than 0.9
   # ------------------------------------
   score_threshold = 0.9
   boxes = tvm_res[0].asnumpy().tolist()
   valid_boxes = []
   for i, score in enumerate(tvm_res[1].asnumpy().tolist()):
       if score > score_threshold:
           valid_boxes.append(boxes[i])
       else:
           break
   
   print("Get {} valid boxes".format(len(valid_boxes)))
   
   ```
   
   I would like to know what I am doing wrong and how I can successfully run the PyTorch object detection tutorial.
   
   Thanks.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi closed issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

Posted by GitBox <gi...@apache.org>.
masahi closed issue #7971:
URL: https://github.com/apache/tvm/issues/7971


   





[GitHub] [tvm] daniperfer commented on issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

Posted by GitBox <gi...@apache.org>.
daniperfer commented on issue #7971:
URL: https://github.com/apache/tvm/issues/7971#issuecomment-992612913


   Yes, it's in #7990.





[GitHub] [tvm] daniperfer commented on issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

Posted by GitBox <gi...@apache.org>.
daniperfer commented on issue #7971:
URL: https://github.com/apache/tvm/issues/7971#issuecomment-833597271


   I will reformulate the question in a new issue, since this one is already closed...





[GitHub] [tvm] daniperfer commented on issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

Posted by GitBox <gi...@apache.org>.
daniperfer commented on issue #7971:
URL: https://github.com/apache/tvm/issues/7971#issuecomment-832759852


   Thanks for the answer, @masahi.
   I see what you mean. I have reverted my modifications to the tutorial script and re-run the original `tutorials/frontend/deploy_object_detection_pytorch.py`, which uses the `TraceWrapper` class.
   
   However, I got a different error this time: **LLVM ERROR: out of memory. Aborted (core dumped)**.
   
   Any thoughts on why that error could have happened?
   
   ```
   (my-vitis-ai-pytorch) Vitis-AI ~/tvm > python tutorials/frontend/deploy_object_detection_pytorch.py
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torch/tensor.py:593: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
     'incorrect results).', category=RuntimeWarning)
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torch/nn/functional.py:3123: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     dtype=torch.float32)).float())) for i in range(dim)]
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/models/detection/anchor_utils.py:147: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     torch.tensor(image_size[1] // g[1], dtype=torch.int64, device=device)] for g in grid_sizes]
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/ops/boxes.py:128: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/ops/boxes.py:130: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/models/detection/transform.py:271: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     for s, s_orig in zip(new_size, original_size)
   /home/vitis-ai-user/.conda/envs/my-vitis-ai-pytorch/lib/python3.6/site-packages/torchvision/models/detection/roi_heads.py:372: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
     return torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)
   LLVM ERROR: out of memory
   Aborted (core dumped)
   ```





[GitHub] [tvm] masahi commented on issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

Posted by GitBox <gi...@apache.org>.
masahi commented on issue #7971:
URL: https://github.com/apache/tvm/issues/7971#issuecomment-832174435


   You need to use `model = TraceWrapper(model_func(pretrained=True))`. The output of the PyTorch Mask R-CNN model needs to be a tensor or a tuple of tensors, and `TraceWrapper` makes sure the output is in a format PyTorch tracing supports.
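   
   For example, a minimal sketch of the intended usage, reusing the `TraceWrapper` and `dict_to_tuple` definitions from the tutorial script quoted above:
   ```
   import numpy as np
   import torch
   import torchvision
   
   # Wrap the detection model before tracing it.
   model_func = torchvision.models.detection.maskrcnn_resnet50_fpn
   model = TraceWrapper(model_func(pretrained=True))
   model.eval()
   
   inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, 300, 300)))
   with torch.no_grad():
       # TraceWrapper.forward converts the model's List[Dict[str, Tensor]]
       # output into a flat tuple of tensors, which tracing accepts.
       script_module = torch.jit.trace(model, inp)
   ```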





[GitHub] [tvm] abdulazizm commented on issue #7971: Runtime error when tracing maskrcnn model: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions

Posted by GitBox <gi...@apache.org>.
abdulazizm commented on issue #7971:
URL: https://github.com/apache/tvm/issues/7971#issuecomment-992568025


   > I will reformulate the question in a new issue, since this one is already closed...
   
   Did you raise a new issue for this query?

