Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/02/19 06:24:20 UTC

[GitHub] [tvm] VertexC opened a new issue #7476: onnx model failed when relay.build()

VertexC opened a new issue #7476:
URL: https://github.com/apache/tvm/issues/7476


   Hi,
   
    I am trying to build an ONNX model with TVM, but I get the following error:
   ```bash
   Check failed: oshape_sum == data_shape_sum (2048 vs. 131072) : Input tensor shape and reshaped shape are not compatible
   ```
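    For what it's worth, the numbers line up: 131072 = 64 × 2048, which would correspond to `batch_size=64` in this run and suggests a `Reshape` in the graph whose target shape has the batch dimension hardcoded to 1 (2048 elements total) while the actual input carries the full batch. A minimal NumPy sketch of the same element-count mismatch (illustrative only, not TVM internals):
    ```python
    import numpy as np

    batch_size = 64
    data = np.zeros((batch_size, 2048), dtype="float32")  # 131072 elements
    target_shape = (1, 2048)  # 2048 elements; batch hardcoded to 1

    # Mirrors the failed check: oshape_sum (2048) vs data_shape_sum (131072)
    print(int(np.prod(target_shape)), data.size)  # 2048 131072

    try:
        data.reshape(target_shape)
    except ValueError as err:
        print("reshape failed:", err)
    ```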
    The value `131072` changes if I change the `batch_size` in the following code:
   ```python
   import os
   import time
   
   import onnx
   import numpy as np
   import tvm
   from tvm import te
   import tvm.relay as relay
   from tvm.contrib import graph_runtime
   
   import util
   
   
    def onnx2tvm_runner(model_name, batch_size=1, backend='cuda'):
        model, shape = util.onnx_model(model_name)

        # The models expect float32 input, so cast before handing data to the runtime.
        data = np.random.rand(batch_size, *shape).astype("float32")
        input_name = model.graph.input[0].name

        shape_dict = {input_name: (batch_size, *shape)}
        mod, params = relay.frontend.from_onnx(model, shape_dict)

        # TODO: how opt_level affects performance
        opt_level = 3
        if backend == 'llvm':
            target = 'llvm'
            ctx = tvm.cpu()
        else:
            target = tvm.target.cuda()
            ctx = tvm.gpu()

        with tvm.transform.PassContext(opt_level=opt_level):
            lib = relay.build(mod, target=target, params=params)

        module = graph_runtime.GraphModule(lib["default"](ctx))
        data = tvm.nd.array(data)

        def runner(data_size):
            for _ in range(data_size // batch_size):
                module.set_input(input_name, data)
                module.run()

        return runner
   
   
   if __name__ == "__main__":
       import argparse
       parser = argparse.ArgumentParser(description="benchmark of onnx/tvm")
       parser.add_argument("model", help="onnx model name")
       parser.add_argument("--backend",
                           choices=['cuda', 'llvm'],
                           default='cuda',
                           help='backend target to run')
       parser.add_argument("--batch", type=int, default=1, help='batch size')
       parser.add_argument("--size", type=int, default=256, help='data size')
       args = parser.parse_args()
   
       os.environ['TVM_BACKTRACE'] = '1'
       runner = onnx2tvm_runner(args.model,
                                batch_size=args.batch,
                                backend=args.backend)
       duration = util.simple_bench(runner, args.size)
       print(duration)
   ```
   
    The models I used are downloaded by the following script:
   ```bash
   #!/bin/bash
   SCRIPT=$(readlink -f "$0")
   DIR=$(dirname "$SCRIPT")
    echo "$DIR"
    if [ ! -f "${DIR}/resnet50.onnx" ]; then
        wget -O "${DIR}/resnet50.onnx" https://github.com/onnx/models/raw/master/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx
    fi
    if [ ! -f "${DIR}/mobilenet.onnx" ]; then
        wget -O "${DIR}/mobilenet.onnx" https://github.com/onnx/models/raw/master/vision/classification/mobilenet/model/mobilenetv2-7.onnx
    fi
    if [ ! -f "${DIR}/vgg16.onnx" ]; then
        wget -O "${DIR}/vgg16.onnx" https://github.com/onnx/models/raw/master/vision/classification/vgg/model/vgg16-7.onnx
    fi
    if [ ! -f "${DIR}/inception.onnx" ]; then
        wget -O "${DIR}/inception.onnx" https://github.com/onnx/models/raw/master/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-9.onnx
    fi
   ```
   
    Meanwhile, when `batch_size=1`, all models build successfully. When `batch_size` is not 1, only vgg16 passes the build; all the others (resnet50/mobilenet/inception) fail.
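    A pattern consistent with this symptom is that these exported classification models hardcode the batch dimension inside the `Reshape` target shape, which is stored as a graph initializer, so any `batch_size` other than the export-time value breaks the element count. As a workaround sketch (an assumption about these particular model files, not a confirmed fix), the leading dimension of each such initializer can be rewritten to -1 so it is inferred at runtime, before calling `relay.frontend.from_onnx`:
    ```python
    import onnx
    import numpy as np
    from onnx import numpy_helper


    def relax_reshape_batch(model: onnx.ModelProto) -> onnx.ModelProto:
        """Rewrite the leading dim of every Reshape target shape to -1 (inferred)."""
        init_map = {init.name: init for init in model.graph.initializer}
        for node in model.graph.node:
            if node.op_type == "Reshape" and len(node.input) > 1:
                shape_init = init_map.get(node.input[1])
                if shape_init is None:
                    continue  # shape is computed dynamically; nothing to patch
                shape = numpy_helper.to_array(shape_init).copy()
                if shape.size > 0 and shape[0] == 1:
                    shape[0] = -1  # let the runtime infer the batch dimension
                    shape_init.CopyFrom(
                        numpy_helper.from_array(shape.astype(np.int64), shape_init.name))
        return model
    ```
    This would be applied as `model = relax_reshape_batch(model)` right after loading the model; whether it is the right fix for these specific files is untested speculation.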


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] tqchen closed issue #7476: onnx model failed when relay.build()

tqchen closed issue #7476:
URL: https://github.com/apache/tvm/issues/7476


   




[GitHub] [tvm] tqchen commented on issue #7476: onnx model failed when relay.build()

tqchen commented on issue #7476:
URL: https://github.com/apache/tvm/issues/7476#issuecomment-785403452


   please open a new troubleshooting thread on https://discuss.tvm.apache.org/

