Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/02 04:06:42 UTC

[GitHub] [tvm] apivovarov opened a new pull request #10882: [LLVM] Support CodeGenBlob for large >2GB models

apivovarov opened a new pull request #10882:
URL: https://github.com/apache/tvm/pull/10882


   If constant data larger than 2 GB is placed in the default `.rodata` section, linking it into a shared library fails with `relocation truncated to fit: R_X86_64_PC32`.
   The issue exists on the Linux x86_64 platform.
   GCC handles this situation with the `-mcmodel=medium` flag, but LLVM ignores it.
   The workaround is to explicitly place large constant data in the `.lrodata` section.
   This PR puts constant data larger than 1 GB into the `.lrodata` section.
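   The mechanism can be sketched in plain C (a minimal illustration, not TVM's actual codegen, which emits the section via the LLVM API): pinning a constant to `.lrodata` makes the toolchain address it with 64-bit relocations instead of the 32-bit PC-relative `R_X86_64_PC32` that overflows past 2 GB. The variable name and its small size here are placeholders for the multi-gigabyte model blob.
   
   ```c
   #include <stdio.h>
   #include <string.h>
   
   /* Hypothetical stand-in for the embedded model blob. With the GCC/Clang
    * `section` attribute it lands in `.lrodata` (the "large" read-only data
    * section from the x86-64 psABI) instead of the default `.rodata`, so it
    * is not subject to the +/-2 GB reach of R_X86_64_PC32 relocations. */
   static const char big_blob[] __attribute__((section(".lrodata"))) = "model-bytes";
   
   int main(void) {
       /* The data is referenced like any other constant. */
       printf("%s\n", big_blob);
       return 0;
   }
   ```
   
   Compiling with `gcc -c` and inspecting the object with `readelf -S` would show the blob in a `.lrodata` section rather than `.rodata`.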
   
   Model compilation and execution were tested on a g4dn instance with a large MXNet model:
   [gen-large-mxnet-model-py](https://gist.github.com/apivovarov/3a871b4467b819cf29f7087ccd5f6b8f#file-gen-large-mxnet-model-py)
   
   The serialized TensorRT TVM module was saved to a `model.so` file (its size exceeds 2 GB):
   ```
   2,361,256,184  model.so
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] FrozenGene commented on pull request #10882: [LLVM] Support CodeGenBlob for large >2GB models

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #10882:
URL: https://github.com/apache/tvm/pull/10882#issuecomment-1086596133


   Could we use `setCodeModel` to solve this issue? @apivovarov 

