Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/11/07 16:55:36 UTC

[GitHub] [incubator-tvm] trevor-m commented on a change in pull request #6872: [BYOC][TRT] Allocate GPU data buffers and transfer data when needed

trevor-m commented on a change in pull request #6872:
URL: https://github.com/apache/incubator-tvm/pull/6872#discussion_r519196129



##########
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##########
@@ -106,9 +104,11 @@ class TensorRTRuntime : public JSONRuntimeBase {
 #ifdef TVM_GRAPH_RUNTIME_TENSORRT
   /*! \brief Run inference using built engine. */
   void Run() override {
+    BuildEngine();

Review comment:
       Thanks @comaniac for the review! Yes, to allocate the device buffers we need the DLTensor context and shape. `data_entry_` in the JSON runtime isn't initialized until `Run()`, so I had to move `BuildEngine()` there.
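
   As a minimal, standalone sketch of that lazy-build pattern (the `Tensor`, `Runtime`, and `AllocateDeviceBuffer` names are hypothetical, not the PR's actual code): the engine is built on the first `Run()` call, once tensor shapes and device context are finally available, and reused afterwards.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a DLTensor: just the fields the build step needs.
struct Tensor {
  std::vector<int64_t> shape;
  int device_id;
};

class Runtime {
 public:
  void Run(const std::vector<Tensor>& inputs) {
    BuildEngine(inputs);  // no-op after the first call
    // ... enqueue inference using the cached engine ...
  }

 private:
  void BuildEngine(const std::vector<Tensor>& inputs) {
    if (built_) return;
    // Shapes and device info only become available here, at first Run(),
    // which is why the build cannot happen earlier (e.g. in Init()).
    for (const Tensor& t : inputs) {
      AllocateDeviceBuffer(t);
    }
    built_ = true;
  }

  void AllocateDeviceBuffer(const Tensor&) { /* cudaMalloc etc. */ }

  bool built_ = false;
};
```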
   
   In the future, we plan to build engines dynamically for different input shapes in order to handle subgraphs with dynamic input sizes, so moving `BuildEngine()` into `Run()` would be needed for that anyway.
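
   As a rough, hypothetical sketch of the shape-keyed engine cache that dynamic input sizes would call for (none of these names come from the actual codebase), each distinct input-shape signature would get its own built engine:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// One shape signature = the shapes of all inputs.
using ShapeKey = std::vector<std::vector<int64_t>>;

class DynamicRuntime {
 public:
  void Run(const ShapeKey& input_shapes) {
    Engine& engine = GetOrBuildEngine(input_shapes);
    // ... run inference with `engine` ...
    (void)engine;  // silence unused-variable warning in this sketch
  }

 private:
  struct Engine { /* built engine + device buffers */ };

  Engine& GetOrBuildEngine(const ShapeKey& shapes) {
    auto it = engines_.find(shapes);
    if (it == engines_.end()) {
      // First time this shape signature is seen: build and cache an engine.
      it = engines_.emplace(shapes, BuildForShapes(shapes)).first;
    }
    return it->second;
  }

  Engine BuildForShapes(const ShapeKey&) { return Engine{}; }

  std::map<ShapeKey, Engine> engines_;  // cache keyed by input shapes
};
```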




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org