Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/06/01 21:21:53 UTC

[GitHub] [tvm] comaniac commented on a change in pull request #8172: [BYOC][TensorRT] Reuse TRT engines based on max_batch_size for dynamic batching, improve device buffer allocation

comaniac commented on a change in pull request #8172:
URL: https://github.com/apache/tvm/pull/8172#discussion_r643480880



##########
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##########
@@ -64,7 +64,8 @@ class TensorRTRuntime : public JSONRuntimeBase {
                            const Array<String>& const_names)
       : JSONRuntimeBase(symbol_name, graph_json, const_names),
         use_implicit_batch_(true),
-        max_workspace_size_(size_t(1) << 30) {}
+        max_workspace_size_(size_t(1) << 30),
+        highest_batch_size_(-1) {}

Review comment:
       Better to make the naming consistent:
   ```suggestion
           max_batch_size_(-1) {}
   ```

##########
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##########
@@ -221,10 +248,15 @@ class TensorRTRuntime : public JSONRuntimeBase {
     }
 
     // Build engine.
-    trt_engine_cache_[std::make_pair(symbol_name_, batch_size_)] = builder.BuildEngine();
+    trt_engine_cache_[std::make_pair(symbol_name_, batch_size)] = builder.BuildEngine();
     DLOG(INFO) << "Finished building TensorRT engine for subgraph " << symbol_name_
-               << " with batch size " << batch_size_;
+               << " with batch size " << batch_size;
+    // Update highest batch size.
+    if (batch_size > highest_batch_size_) {
+      highest_batch_size_ = batch_size;
+    }

Review comment:
       nit
   ```suggestion
       highest_batch_size_ = (batch_size > highest_batch_size_) ? batch_size : highest_batch_size_;
   ```
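    As a side note, `std::max` (from `<algorithm>`) would express the same one-line update; a minimal sketch, assuming `batch_size` and `highest_batch_size_` are both `int` as in this diff:
    ```cpp
    // Keep the largest batch size seen so far.
    highest_batch_size_ = std::max(highest_batch_size_, batch_size);
    ```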

##########
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##########
@@ -174,25 +176,50 @@ class TensorRTRuntime : public JSONRuntimeBase {
       int binding_index = engine->getBindingIndex(name.c_str());
       ICHECK_NE(binding_index, -1);
       if (data_entry_[eid]->device.device_type != kDLCUDA) {
-        device_buffers[binding_index].CopyTo(const_cast<DLTensor*>(data_entry_[eid]));
+        auto device_buffer = GetOrAllocateDeviceBuffer(eid, binding_index);
+        device_buffer.CopyTo(const_cast<DLTensor*>(data_entry_[eid]));
       }
     }
   }
 
  private:
+  /*! \brief Get batch size for engine from the runtime input shapes. */
+  int GetBatchSize() {
+    return data_entry_[input_var_eid_[0]]->ndim == 0 ? 1 : data_entry_[input_var_eid_[0]]->shape[0];
+  }
+
+  /*! \brief TensorRT engines are built for a maximum batch size. If an engine doesn't exist for a
+   * certain batch size already, see if we can reuse an engine built for a higher batch size. */
+  bool FindCompatibleEngine(int batch_size, int* compatible_engine_batch_size) {
+    // Check for exact match
+    if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size))) {
+      *compatible_engine_batch_size = batch_size;
+      return true;
+    }

Review comment:
       1. IIUC, it seems like the exact-match check isn't needed, so we can rely on the following logic and remove this one. This would also reduce the number of cached engines.
    2. Accordingly, can we keep only one engine with the largest batch size seen so far? I.e., after we build a new engine with a larger batch size, can we throw away the old one? (A rough sketch of this idea is below.)
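
    A rough sketch of point 2, assuming at most one engine is kept per subgraph, built for the largest batch size seen so far. `max_batch_size_` follows the rename suggested above; the `EvictSmallerEngine` helper is hypothetical and only illustrates the idea:
    ```cpp
    // Sketch only: serve a request from the single cached engine if that engine was built
    // for a batch size at least as large as the requested one.
    bool FindCompatibleEngine(int batch_size, int* compatible_engine_batch_size) {
      auto key = std::make_pair(symbol_name_, max_batch_size_);
      if (batch_size <= max_batch_size_ && trt_engine_cache_.count(key)) {
        *compatible_engine_batch_size = max_batch_size_;
        return true;
      }
      return false;  // No compatible engine; the caller builds one for `batch_size`.
    }

    // After building a new engine for a larger batch size, the old, smaller engine could be
    // dropped so that at most one engine per subgraph is kept.
    void EvictSmallerEngine(int old_batch_size, int new_batch_size) {
      if (old_batch_size >= 0 && old_batch_size < new_batch_size) {
        trt_engine_cache_.erase(std::make_pair(symbol_name_, old_batch_size));
      }
      max_batch_size_ = new_batch_size;
    }
    ```
    In implicit batch mode, an engine built with a larger max batch size can execute any smaller batch, which is what makes this kind of reuse safe.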



