Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/10/08 10:24:02 UTC

[GitHub] [tvm] kongroo commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

kongroo commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-938528754


   I've fixed some namespace and style issues.  Could you please help review this PR? @junrushao1994 @jcf94 @msakai @jroesch 
   
   And I have some questions to discuss:
   1. The `forward` function is not thread-safe. Should we use a mutex to make it thread-safe?
   2. We load the TVM module from files (mod.so, graph.json, params). But if we pass a relative path for the .so file, it may produce unexpected results. Consider this case: we have `export_dir1/mod.so` and `export_dir2/mod.so`. If we chdir into `export_dir1` and load `./mod.so`, then chdir into `export_dir2` and try to load `./mod.so`, `export_dir2/mod.so` will not be loaded, presumably because the dynamic loader treats the path string `./mod.so` as already loaded. One possible solution is to translate the file path to an absolute path before calling `dlopen` in `src/runtime/dso_library.cc`. What's your opinion?
   3. We store TVM graph modules in a map `tvm_modules_`, keyed by the input tensors' shapes. But this requires all input tensors to have fixed shapes. To support dynamic shapes, we may need to iterate over all the keys of `tvm_modules_` to find a matching one. Is it necessary to support dynamic shapes? If so, how can we do it efficiently?
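   Regarding question 1, the mutex approach could look like the following sketch. This is not the PR's actual C++ code; `ThreadSafeForward` and `forward_fn` are hypothetical names, and the point is only that a single lock serializing calls is the simplest fix, at the cost of losing concurrency across callers:

   ```python
   import threading

   class ThreadSafeForward:
       """Hypothetical wrapper: serialize calls to a non-thread-safe forward()."""

       def __init__(self, forward_fn):
           self._forward = forward_fn
           self._lock = threading.Lock()  # one mutex guarding the underlying module

       def __call__(self, *inputs):
           # Only one thread at a time may run the wrapped forward().
           with self._lock:
               return self._forward(*inputs)
   ```

   A per-module mutex (rather than one global lock) would at least let different loaded modules run in parallel.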
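   Regarding question 2, the proposed fix amounts to expanding the library path to an absolute one before it reaches the dynamic loader, so that `./mod.so` loaded from two different working directories resolves to two distinct files. A minimal sketch of the idea (the helper name `to_absolute` is hypothetical; the real change would live in `src/runtime/dso_library.cc`):

   ```python
   import os

   def to_absolute(file_name):
       # Hypothetical helper mirroring the proposed fix: normalize a
       # (possibly relative) library path to an absolute path, so the
       # same relative string "./mod.so" seen from two directories
       # becomes two distinct keys for the dynamic loader.
       return os.path.abspath(os.path.expanduser(file_name))
   ```

   With this normalization, the second `dlopen` in the scenario above would receive `export_dir2/mod.so` as a full path and could no longer be confused with the already-loaded `export_dir1/mod.so`.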
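   Regarding question 3, one possible shape of the lookup is: keep the O(1) exact-shape fast path for fixed-shape modules, and fall back to a linear scan only for entries that declare dynamic dimensions (marked here as -1). This is a hypothetical sketch of the `tvm_modules_` idea, not the PR's implementation:

   ```python
   class ShapeKeyedModules:
       """Hypothetical sketch: exact-shape hit first, then a linear scan
       over entries whose dynamic dimensions are marked as -1."""

       def __init__(self):
           self._modules = {}  # key: tuple of input-shape tuples, value: module

       def put(self, shapes, module):
           self._modules[tuple(map(tuple, shapes))] = module

       def get(self, shapes):
           key = tuple(map(tuple, shapes))
           if key in self._modules:  # O(1) fast path for fixed shapes
               return self._modules[key]
           # Fallback: find an entry compatible with these shapes, where
           # -1 in a stored shape matches any concrete dimension.
           for cand, mod in self._modules.items():
               if len(cand) == len(key) and all(
                   len(c) == len(s)
                   and all(cd == -1 or cd == sd for cd, sd in zip(c, s))
                   for c, s in zip(cand, key)
               ):
                   return mod
           return None
   ```

   The scan keeps fixed-shape lookups fast while still answering the dynamic-shape question, though the fallback is O(n) in the number of stored modules; a trie over dimensions could make it sublinear if that ever matters.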
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org