Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/07/29 07:41:00 UTC

[GitHub] [incubator-tvm] tiandiao123 commented on issue #1027: TVMError: src/runtime/cuda/cuda_module.cc:93: CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX

tiandiao123 commented on issue #1027:
URL: https://github.com/apache/incubator-tvm/issues/1027#issuecomment-664823994


   > I get the exact same issue.
   > 
   > jetson@jetson:~/fast-depth/deploy$ python3 tx2_run_tvm.py --input-fp data/rgb.npy --output-fp data/pred.npy --model-dir ../results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/ --cuda True
   > => [TVM on TX2] using model files in ../results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/
   > => [TVM on TX2] loading model lib and ptx
   > => [TVM on TX2] loading model graph and params
   > => [TVM on TX2] creating TVM runtime module
   > => [TVM on TX2] feeding inputs and params into TVM module
   > => [TVM on TX2] running TVM module, saving output
   > Traceback (most recent call last):
   > 
   > File "tx2_run_tvm.py", line 91, in <module>
   > main()
   > 
   > File "tx2_run_tvm.py", line 88, in main
   > run_model(args.model_dir, args.input_fp, args.output_fp, args.warmup, args.run, args.cuda, try_randin=args.randin)
   > 
   > File "tx2_run_tvm.py", line 36, in run_model
   > run() # not gmodule.run()
   > 
   > File "/home/jetson/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
   > raise get_last_ffi_error()
   > 
   > tvm._ffi.base.TVMError: Traceback (most recent call last):
   > [bt] (3) /home/jetson/tvm/build/libtvm.so(TVMFuncCall+0x70) [0x7fad7ccec0]
   > [bt] (2) /home/jetson/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::detail::PackFuncVoidAddr_<4, tvm::runtime::CUDAWrappedFunc>(tvm::runtime::CUDAWrappedFunc, std::vector<tvm::runtime::detail::ArgConvertCode, std::allocator<tvm::runtime::detail::ArgConvertCode> > const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xe8) [0x7fad850b08]
   > [bt] (1) /home/jetson/tvm/build/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void**) const+0x6cc) [0x7fad85093c]
   > [bt] (0) /home/jetson/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4c) [0x7facfdebac]
   > File "/home/jetson/tvm/src/runtime/cuda/cuda_module.cc", line 110
   > File "/home/jetson/tvm/src/runtime/library_module.cc", line 91
   > CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX
   > 
   > Still haven't found a solution to it. I am running it on a Jetson Nano. Please help.
   
   Did you find a solution? I have the exact same issue. I don't know how to fix it; could you help me?
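
   One thing worth checking (a sketch, not a confirmed fix from this thread): CUDA_ERROR_INVALID_PTX typically means the PTX embedded in the compiled module was built for a different compute capability than the board's GPU, which is easy to hit when cross-compiling on a host machine. The helper below is hypothetical (not a TVM API); it just maps common Jetson boards to their `sm_xx` arch strings so the right value can be passed to the CUDA target at build time:

   ```python
   # Hypothetical helper: map Jetson boards to their CUDA arch strings so the
   # cross-compiled PTX matches the device's compute capability.
   JETSON_CUDA_ARCH = {
       "nano": "sm_53",    # Jetson Nano: compute capability 5.3
       "tx2": "sm_62",     # Jetson TX2: compute capability 6.2
       "xavier": "sm_72",  # Jetson AGX Xavier: compute capability 7.2
   }

   def cuda_arch_for(board: str) -> str:
       """Return the -arch value to use when building PTX for `board`."""
       try:
           return JETSON_CUDA_ARCH[board.lower()]
       except KeyError:
           raise ValueError(f"unknown Jetson board: {board!r}")

   # The arch string would then go into the CUDA target when compiling the
   # model (assumption -- the exact API varies by TVM version), e.g.:
   #   target = tvm.target.Target("cuda -arch=" + cuda_arch_for("nano"))
   print(cuda_arch_for("nano"))  # sm_53
   ```

   If the model directory here (`tx2_gpu_...`) was compiled for the TX2 (sm_62), the Nano (sm_53) driver would refuse the PTX with exactly this error, so recompiling for the Nano's arch would be the first thing to try.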


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org