Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/09/22 12:35:05 UTC

[GitHub] [tvm] Meteorix opened a new pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Meteorix opened a new pull request #8777:
URL: https://github.com/apache/tvm/pull/8777


   To make TVM more accessible to PyTorch users, we add a PyTorchTVM module that supports the following workflow:
   1. convert a TorchScript module to a TVM graph
   2. build and tune the TVM graph
   3. export the tuned TVM graph as a PyTorch op
   4. trace the TVM PyTorch op with other PyTorch modules via torch.jit, then save/load/serve it as a normal PyTorch model
   
   The example usage is here: [apps/pt_class/tests/test_pt_script.py](https://github.com/Meteorix/tvm/blob/meteorix_main_2/apps/pt_class/tests/test_pt_script.py); a minimal sketch follows below. We hope to discuss the user API further with the community. Please help review @Laurawly @junrushao1994 @tqchen, thanks!
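
   A minimal sketch of the intended workflow (the `option` keys below are assumptions based on the linked test script, and `compile`'s exact calling convention may differ in the final API):

   ```python
   import torch
   import tvm.contrib.torch as pt_tvm  # module added in this PR


   class MyModel(torch.nn.Module):
       def forward(self, x):
           return torch.relu(x) + 1.0


   # step 1: convert to torchscript
   script_module = torch.jit.script(MyModel())

   # steps 2-3: build (and optionally tune) the tvm graph and export it as a
   # pytorch op; every key in this dict is an illustrative assumption
   option = {
       "input_infos": [("x", (1, 3, 224, 224))],
       "default_dtype": "float32",
       "export_dir": "pytorch_compiled",
       "num_outputs": 1,
       "tuning_n_trials": 0,  # set > 0 to enable autotvm tuning
   }
   tvm_op = pt_tvm.compile(script_module, option)

   # step 4: trace the tvm pytorch op with other pytorch modules, then
   # save/load/serve as a normal pytorch model (calling convention assumed)
   x = torch.randn(1, 3, 224, 224)
   traced = torch.jit.trace(tvm_op, (x,))
   traced.save("model_tvm.pt")
   ```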
   
   Credit: the original author is @kongroo .





[GitHub] [tvm] masahi commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-949117502


   Sorry I forgot about this PR, will take another look soon. cc @junrushao1994 @jroesch 





[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740012069



##########
File path: python/tvm/contrib/torch/__init__.py
##########
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Module container of Pytorch custom class"""
+import os
+import platform
+import torch
+from tvm._ffi import libinfo
+from tvm.relay.frontend import pytorch
+
+
+def _load_platform_specific_library(lib_name="libpt_tvmdsoop"):
+    system = platform.system()
+    if system == "Darwin":
+        lib_file_name = lib_name + ".dylib"
+    elif system == "Windows":
+        lib_file_name = lib_name + ".dll"
+    else:
+        lib_file_name = lib_name + ".so"
+    lib_path = libinfo.find_lib_path()[0]
+    lib_dir = os.path.dirname(lib_path)
+    lib_file_path = os.path.join(lib_dir, lib_file_name)
+    torch.classes.load_library(lib_file_path)
+
+
+_load_platform_specific_library()
+
+from . import module  # nopep8, pylint: disable=wrong-import-position
+
+GraphModule = module.GraphModule
+VMModule = module.VMModule
+TraceTvmModule = module.TraceTvmModule
+
+from . import pytorch_tvm  # nopep8, pylint: disable=wrong-import-position
+
+PyTorchTVMModule = pytorch_tvm.PyTorchTVMModule
+compile = pytorch_tvm.compile  # pylint: disable=redefined-builtin,invalid-name

Review comment:
       resolved







[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740012399



##########
File path: python/tvm/contrib/torch/__init__.py
##########
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Module container of Pytorch custom class"""
+import os
+import platform
+import torch
+from tvm._ffi import libinfo
+from tvm.relay.frontend import pytorch
+
+
+def _load_platform_specific_library(lib_name="libpt_tvmdsoop"):
+    system = platform.system()
+    if system == "Darwin":
+        lib_file_name = lib_name + ".dylib"
+    elif system == "Windows":
+        lib_file_name = lib_name + ".dll"

Review comment:
       Yes, we only tested on linux







[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r713852344



##########
File path: apps/pt_class/tests/test_pt_compile.py
##########
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Test script for tf op module"""

Review comment:
       pt







[GitHub] [tvm] Meteorix commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924707997


   > @Meteorix any updates on when this will be ready for review? happy to help shepherd these changes and work with you to get them merged.
   
   Yes, it's ready for review! 





[GitHub] [tvm] junrushao1994 commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-900830923


   CC @masahi @alexwong @comaniac @yzhliu 





[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r735330004



##########
File path: src/contrib/torch/pt_call_tvm/tvm_class.cc
##########
@@ -0,0 +1,686 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+#include <dlpack/dlpack.h>
+#include <torch/custom_class.h>
+#include <torch/script.h>
+#include <tvm/runtime/container/adt.h>
+#include <tvm/runtime/device_api.h>
+#include <tvm/runtime/module.h>
+#include <tvm/runtime/packed_func.h>
+#include <tvm/runtime/registry.h>
+#include <tvm/runtime/vm/vm.h>
+
+#include <map>
+#include <string>
+#include <vector>
+
+#include "../utils.h"
+
+namespace tvm {
+namespace contrib {
+namespace pytorch {
+
+/*! \brief Class holding necessary components to call TVM graph runtime */
+class TvmGraphModulePack {
+ public:
+  /*!
+   * \brief Constructor.
+   *
+   * \param path Encoded path of graph runtime assets.
+   * \param device_type int64_t, kDLCPU or kDLCUDA.
+   * \param device_id int64_t.
+   */
+  explicit TvmGraphModulePack(std::string path, int64_t device_type, int64_t device_id)
+      : path_(std::move(path)) {
+    LOG(INFO) << "[TvmGraphModule] loading module at path: [" << path_ << "] on device ["
+              << (device_type == kDLCUDA ? "cuda:" : "cpu:") << device_id << "]...";
+    std::string lib_path, graph_path, params_path;
+    DecodePaths(path_, &lib_path, &graph_path, &params_path);
+
+    // load graph
+    std::ifstream graph_in(graph_path);
+    std::string graph_data((std::istreambuf_iterator<char>(graph_in)),
+                           std::istreambuf_iterator<char>());
+    graph_in.close();
+
+    // load mod syslib
+    tvm::runtime::Module lib = tvm::runtime::Module::LoadFromFile(lib_path);
+
+    const auto runtime_create = *tvm::runtime::Registry::Get("tvm.graph_executor.create");
+
+    // read params data
+    std::ifstream params_in(params_path, std::ios::binary);
+    std::string params_data((std::istreambuf_iterator<char>(params_in)),
+                            std::istreambuf_iterator<char>());
+    params_in.close();
+    TVMByteArray params_arr;
+    params_arr.data = params_data.c_str();
+    params_arr.size = params_data.length();
+
+    // set devices
+    module_ = runtime_create(graph_data, lib, device_type, device_id);
+    const tvm::runtime::PackedFunc load_params = module_.GetFunction("load_params");
+    load_params(params_arr);
+
+    set_input = module_.GetFunction("set_input_zero_copy");
+    run = module_.GetFunction("run");
+    get_output = module_.GetFunction("get_output");
+    set_output = module_.GetFunction("set_output_zero_copy");
+    num_outputs_ = module_.GetFunction("get_num_outputs")();
+  }
+
+  static constexpr char kPathDelimiter = '|';
+
+  /*!
+   * \brief Decode lib_path, graph_path, params_path from encoded path.
+   *
+   * \param path The encoded path, concatenated with `kPathDelimiter`.
+   * \param lib_path The path of .so lib file.
+   * \param graph_path The path of graph.json.
+   * \param params_path The path of params data.
+   */
+  static void DecodePaths(const std::string& path, std::string* lib_path, std::string* graph_path,
+                          std::string* params_path) {
+    std::vector<std::string> paths;
+    for (size_t i = 0, pre = 0, lim = path.size(); i <= lim; ++i) {
+      if (i == lim || path.at(i) == kPathDelimiter) {
+        paths.push_back(path.substr(pre, i - pre));
+        pre = i + 1;
+      }
+    }
+    CHECK_EQ(paths.size(), 3u);
+    *lib_path = paths.at(0);
+    *graph_path = paths.at(1);
+    *params_path = paths.at(2);
+  }
+
+  /*!
+   * \brief Encode lib_path, graph_path, params_path by concatenating them with `kPathDelimiter`.
+   *
+   * \param lib_path The path of .so lib file.
+   * \param graph_path The path of graph.json.
+   * \param params_path The path of params data.
+   *
+   * \return The encoded path, concatenated with `kPathDelimiter`.
+   */
+  static std::string EncodePaths(const std::string& lib_path, const std::string& graph_path,
+                                 const std::string& params_path) {
+    return lib_path + kPathDelimiter + graph_path + kPathDelimiter + params_path;
+  }
+
+  const std::string& path() const { return path_; }
+
+  const int64_t num_outputs() const { return num_outputs_; }
+
+  tvm::runtime::PackedFunc set_input;
+  tvm::runtime::PackedFunc run;
+  tvm::runtime::PackedFunc get_output;
+  tvm::runtime::PackedFunc set_output;
+
+ private:
+  tvm::runtime::Module module_;
+  int64_t num_outputs_;
+  std::string path_;
+};
+
+/*! \brief Class holding necessary components to call TVM VM runtime */
+class TvmVMModulePack {
+ public:
+  /*!
+   * \brief Constructor.
+   *
+   * \param path Encoded path of vm runtime assets.
+   * \param device_type int64_t, kDLCPU or kDLCUDA.
+   * \param device_id int64_t.
+   */
+  explicit TvmVMModulePack(std::string path, int64_t device_type, int64_t device_id)
+      : path_(std::move(path)) {
+    LOG(INFO) << "[TvmVMModule] loading module at path: [" << path_ << "] on device ["
+              << (device_type == kDLCUDA ? "cuda:" : "cpu:") << device_id << "]...";
+    // build tvm graph runtime
+    std::string lib_path, code_path;
+    DecodePaths(path_, &lib_path, &code_path);
+    // load lib
+    auto loaded_lib = tvm::runtime::Module::LoadFromFile(lib_path, "so");
+    // load code
+    std::ifstream code_in(code_path);
+    std::string loaded_code((std::istreambuf_iterator<char>(code_in)),
+                            std::istreambuf_iterator<char>());
+    code_in.close();
+    exe_ = tvm::runtime::vm::Executable::Load(loaded_code, loaded_lib);
+    const auto runtime_create = *tvm::runtime::Registry::Get("runtime._VirtualMachine");
+    vm_ = runtime_create(exe_);
+    auto init_func = vm_.GetFunction("init", false);
+    auto alloc_type = static_cast<int>(tvm::runtime::vm::AllocatorType::kPooled);
+    if (device_type != kDLCPU) {
+      // CPU is required for executing shape functions
+      init_func(static_cast<int>(kDLCPU), 0, alloc_type, device_type, device_id, alloc_type);
+    } else {
+      init_func(device_type, device_id, alloc_type);
+    }
+    set_input = vm_.GetFunction("set_input", false);
+    invoke = vm_.GetFunction("invoke", false);
+  }
+
+  static constexpr char kPathDelimiter = '|';
+
+  /*!
+   * \brief Decode lib_path, code_path from encoded path.
+   *
+   * \param path The encoded path, concatenated with `kPathDelimiter`.
+   * \param lib_path The path of lib file.
+   * \param code_path The path of code file.
+   */
+  static void DecodePaths(const std::string& path, std::string* lib_path, std::string* code_path) {
+    std::vector<std::string> paths;
+    for (size_t i = 0, pre = 0, lim = path.size(); i <= lim; ++i) {
+      if (i == lim || path.at(i) == kPathDelimiter) {
+        paths.push_back(path.substr(pre, i - pre));
+        pre = i + 1;
+      }
+    }
+    CHECK_EQ(paths.size(), 2u);
+    *lib_path = paths.at(0);
+    *code_path = paths.at(1);
+  }
+
+  /*!
+   * \brief Encode lib_path, code_path by concatenating them with `kPathDelimiter`.
+   *
+   * \param lib_path The path of vm lib file.
+   * \param code_path The path of code.
+   *
+   * \return The encoded path, concatenated with `kPathDelimiter`.
+   */
+  static std::string EncodePaths(const std::string& lib_path, const std::string& code_path) {
+    return lib_path + kPathDelimiter + code_path;
+  }
+
+  const std::string& path() const { return path_; }
+
+  tvm::runtime::PackedFunc set_input;
+  tvm::runtime::PackedFunc invoke;
+
+ private:
+  tvm::runtime::Module exe_;
+  tvm::runtime::Module vm_;
+  std::string path_;
+};
+
+/*! \brief Pytorch custom class to call TVM */
+class BaseTvmClass : public torch::jit::CustomClassHolder {
+ public:
+  /*!
+   * \brief Constructor.
+   *
+   * \param num_inputs Number of inputs.
+   * \param num_outputs Number of outputs.
+   * \param device std::string, use the pytorch device str format, e.g. `cuda:0`, 'cpu'
+   */
+  BaseTvmClass(const int64_t num_inputs, const int64_t num_outputs, const std::string& device)
+      : num_inputs_(num_inputs), num_outputs_(num_outputs) {
+    auto torch_device = torch::Device(device);
+    device_type_ = torch_device.is_cuda() ? kDLCUDA : kDLCPU;
+    device_id_ = torch_device.index();
+  }
+
+  /*! \brief Virtual destructor. */
+  virtual ~BaseTvmClass() {}
+
+  /*!
+   * \brief Get repr string of pytorch input shapes.
+   *
+   * \param shapes Pytorch shapes of type List[List[int]].
+   *
+   * \return std::string, the representation of inputs shapes.
+   */
+  static std::string TvmShapeRepr(const c10::List<c10::List<int64_t>>& shapes) {
+    std::stringstream ss;
+    for (const auto& shape : shapes) {
+      for (const auto& sz : static_cast<c10::List<int64_t>>(shape)) {
+        ss << sz << "_";
+      }
+      ss << "__";
+    }
+    return ss.str();
+  }
+
+  /*!
+   * \brief Get input shapes.
+   *
+   * \param inputs Inputs with type List[Tensor].
+   *
+   * \return outputs with type List[List[int]].
+   */
+  static c10::List<c10::List<int64_t>> GetShapes(const c10::List<at::Tensor>& inputs) {
+    c10::List<c10::List<int64_t>> shapes;
+    for (const auto& input : inputs) {
+      c10::List<int64_t> shape;
+      for (const auto sz : static_cast<at::Tensor>(input).sizes()) {
+        shape.push_back(sz);
+      }
+      shapes.push_back(shape);
+    }
+    return shapes;
+  }
+
+  /*!
+   * \brief Move the TVM modules to given device.
+   *
+   * \param device String repr of the device to be moved to.
+   */
+  virtual void to(const std::string& device) = 0;
+
+  // getters
+  int64_t num_inputs() const { return num_inputs_; }
+
+  int64_t num_outputs() const { return num_outputs_; }
+
+  int64_t device_type() const { return device_type_; }
+
+  int64_t device_id() const { return device_id_; }
+
+  c10::DeviceType torch_device_type() const {
+    return device_type() == kDLCUDA ? torch::DeviceType::CUDA : torch::DeviceType::CPU;
+  }
+
+  bool is_on_same_device(const torch::Tensor& tensor) const {
+    auto tensor_device_type = tensor.device().type();
+    if (tensor_device_type == torch::DeviceType::CUDA) {
+      return tensor_device_type == torch_device_type() && device_id() == tensor.device().index();
+    }
+    CHECK_EQ(tensor_device_type, torch::DeviceType::CPU);
+    return tensor_device_type == torch_device_type();
+  }
+
+  std::string device() const { return torch::Device(torch_device_type(), device_id()).str(); }
+
+  /*!
+   * \brief Module forward.
+   *
+   * \param inputs Inputs with type List[Tensor].
+   *
+   * \return outputs with type List[Tensor].
+   */
+  virtual c10::List<at::Tensor> forward(const c10::List<at::Tensor>& inputs) = 0;
+
+  /*!
+   * \brief Serialize TVM Modules to Dict<string, string>
+   */
+  virtual c10::Dict<std::string, std::string> SerializeTvmModules() const = 0;
+
+  /*!
+   * \brief deserialize TVM Modules from Dict<string, string>
+   */
+  virtual void DeserializeTvmModules(const c10::Dict<std::string, std::string>& shape_path_map) = 0;
+
+ protected:
+  const int64_t num_inputs_;
+  const int64_t num_outputs_;
+  int64_t device_type_;
+  int64_t device_id_;
+};
+
+/*! \brief Pytorch custom class to call TVM graph runtime */
+class TvmGraphRuntimeClass : public BaseTvmClass {
+ public:
+  TvmGraphRuntimeClass(const int64_t num_inputs, const int64_t num_outputs,
+                       const std::string& device)
+      : BaseTvmClass(num_inputs, num_outputs, device) {}
+
+  /*!
+   * \brief Module forward.
+   *
+   * \param inputs Inputs with type List[Tensor].
+   *
+   * \return outputs with type List[Tensor].
+   */
+  c10::List<at::Tensor> forward(const c10::List<at::Tensor>& inputs) override {
+    CHECK_EQ(inputs.size(), num_inputs_);
+    auto shape_repr = TvmShapeRepr(GetShapes(inputs));
+    std::vector<DLTensor> args(num_inputs_ + num_outputs_);
+    auto iter = tvm_modules_.find(shape_repr);
+    CHECK(iter != tvm_modules_.end());
+    const auto& tvm_pack = iter->second;
+    std::vector<TensorAsBuf> buf_infos;
+    buf_infos.reserve(num_inputs_ + num_outputs_);
+
+    for (int i = 0; i < num_inputs_; ++i) {
+      at::Tensor inp = inputs[i];
+      CHECK(is_on_same_device(inp))
+          << "input #" << i
+          << " of forward is not on the same device with TvmGraphRuntime, expected " << device()
+          << " but got " << inp.device().str();
+      inp = inp.contiguous();
+      buf_infos.emplace_back(inp);
+      auto& input_buf = buf_infos[i];
+      input_buf.CopyFromOrigin();
+      input_buf.MakeDLTensor(&args[i]);
+      tvm_pack.set_input(i, &args[i]);

Review comment:
       No, we use `set_input_zero_copy` for the `set_input` function. 
   ```cpp
   set_input = module_.GetFunction("set_input_zero_copy");
   ```
   And `CopyFromOrigin` also doesn't always copy data.
   ```cpp
     void CopyFromOrigin() {
       if (buf_ == origin_buf_) {
         return;
       }
      ...
   }
   ```
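
    For illustration, the zero-copy aliasing that `set_input_zero_copy` relies on can be reproduced from Python with DLPack (a conceptual sketch only; this PR builds the equivalent `DLTensor` views in C++ via `MakeDLTensor`):
    ```python
    import torch
    import tvm
    from torch.utils.dlpack import to_dlpack

    x = torch.randn(2, 3)  # contiguous CPU tensor
    tvm_x = tvm.nd.from_dlpack(to_dlpack(x))  # shares storage with x, no copy

    # both views alias the same memory, so a write through one is visible
    # through the other
    x[0, 0] = 42.0
    assert tvm_x.numpy()[0, 0] == 42.0
    ```
    This is also why the C++ side calls `inp.contiguous()` before building the DLTensor: this style of aliasing needs a dense layout.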







[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740012664



##########
File path: python/tvm/contrib/torch/pytorch_tvm.py
##########
@@ -0,0 +1,226 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""`compile` api that convert torch module to torch tvm module"""
+import os
+import tvm
+import tvm.testing
+from tvm import relay, autotvm
+from tvm.runtime import load_module
+from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
+from tvm.contrib import graph_executor
+from tvm.contrib.debugger import debug_executor
+from . import GraphModule
+
+
+def tune_tasks(
+    tasks,
+    measure_option,
+    tuner="xgb",
+    n_trial=1000,
+    early_stopping=None,
+    log_filename="tuning.log",
+    use_transfer_learning=True,
+):
+    """Tune tasks and generate tuning log to file"""
+    # create tmp log file
+    tmp_log_file = log_filename + ".tmp"
+    if os.path.exists(tmp_log_file):
+        os.remove(tmp_log_file)
+
+    for i, tsk in enumerate(reversed(tasks)):
+        prefix = f"[Task {i + 1:2d}/{len(tasks):2d}] "
+
+        # create tuner
+        if tuner in ("xgb", "xgb-rank"):
+            tuner_obj = XGBTuner(tsk, loss_type="rank")
+        elif tuner == "ga":
+            tuner_obj = GATuner(tsk, pop_size=100)
+        elif tuner == "random":
+            tuner_obj = RandomTuner(tsk)
+        elif tuner == "gridsearch":
+            tuner_obj = GridSearchTuner(tsk)
+        else:
+            raise ValueError("Invalid tuner: " + tuner)
+
+        if use_transfer_learning:
+            if os.path.isfile(tmp_log_file):
+                tuner_obj.load_history(autotvm.record.load_from_file(tmp_log_file))
+
+        # do tuning
+        tsk_trial = min(n_trial, len(tsk.config_space))
+        tuner_obj.tune(
+            n_trial=tsk_trial,
+            early_stopping=early_stopping,
+            measure_option=measure_option,
+            callbacks=[
+                autotvm.callback.progress_bar(tsk_trial, prefix=prefix),
+                autotvm.callback.log_to_file(tmp_log_file),
+            ],
+        )
+
+    # pick best records to a cache file
+    autotvm.record.pick_best(tmp_log_file, log_filename)
+    os.remove(tmp_log_file)
+
+
+def get_tuning_opt(log_file="tuning.log", n_trial=200):
+    """Returns tuning options"""
+    tuning_opt = {
+        "log_filename": log_file,
+        "tuner": "random",
+        "n_trial": n_trial,
+        "early_stopping": 60,
+        "measure_option": autotvm.measure_option(
+            builder=autotvm.LocalBuilder(timeout=10),
+            runner=autotvm.LocalRunner(number=20, repeat=3, timeout=4, min_repeat_ms=150),
+        ),
+    }
+    return tuning_opt
+
+
+TVM_ASSETS = ["mod.so", "graph.json", "params"]
+
+
+class PyTorchTVMModule:
+    """Helper class for compiling pytorch module to tvm module"""
+
+    def __init__(self) -> None:
+        self.script_module = None
+        self.input_infos = None
+        self.default_dtype = "float32"
+        self.mod = None
+        self.params = None
+        self.tasks = None
+        self.target = "cuda"
+        self.dev = tvm.cuda(0)

Review comment:
       Added arguments for target and device
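
    With those arguments, usage would look roughly like this (the keyword names are assumptions for illustration, replacing the previously hard-coded cuda defaults):
    ```python
    import tvm
    from tvm.contrib.torch import PyTorchTVMModule

    # target/device as explicit constructor arguments (names assumed)
    mod = PyTorchTVMModule(target="llvm", device=tvm.cpu(0))
    ```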







[GitHub] [tvm] Meteorix removed a comment on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix removed a comment on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924887935


   > reasonable
   
   





[GitHub] [tvm] tqchen commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
tqchen commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-926625279


   Coming late to the discussion, and thanks for the great work. It would be nice to discuss the naming a bit, given we are moving towards serious first-class PyTorch support. Some namespace ideas:
   
   - tvm.contrib.pt_op
   - tvm.contrib.pytorch
   - tvm.contrib.torch
   
   
   
   





[GitHub] [tvm] Meteorix commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-901645767


   Thanks, I will write an RFC this week.





[GitHub] [tvm] Meteorix closed pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix closed pull request #8777:
URL: https://github.com/apache/tvm/pull/8777


   





[GitHub] [tvm] Meteorix commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924887935


   > reasonable
   
   





[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r713797109



##########
File path: apps/pt_class/CMakeLists.txt
##########
@@ -0,0 +1,34 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+cmake_minimum_required(VERSION 3.2)
+project(tf_tvmdsoop C CXX)
+
+set(TFTVM_COMPILE_FLAGS -std=c++14)

Review comment:
       Update 'TF' or `tf` references







[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740669880



##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)

Review comment:
       ok, this is the initial PR anyway; I think it is fine to assume that users are responsible for this.
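
    For reference, the user-side contract this implies, as a sketch (shapes and paths are placeholders for artifacts produced by the export step):
    ```python
    import torch
    from tvm.contrib.torch import GraphModule

    # placeholders: these come from the export step
    input_shapes = [[1, 3, 224, 224]]
    lib_path, graph_path, params_path = "mod.so", "graph.json", "params"

    # the user picks the device up front; the module does not guess it
    m = GraphModule(num_inputs=1, num_outputs=1, device="cuda:0")
    m.init(input_shapes, lib_path, graph_path, params_path)

    # inputs must already live on the chosen device
    inputs = [torch.randn(1, 3, 224, 224, device="cuda:0")]
    outputs = m(inputs)
    ```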







[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r735309031



##########
File path: src/contrib/torch/pt_call_tvm/tvm_class.cc
##########
@@ -0,0 +1,686 @@
+/*! \brief Pytorch custom class to call TVM graph runtime */
+class TvmGraphRuntimeClass : public BaseTvmClass {
+ public:
+  TvmGraphRuntimeClass(const int64_t num_inputs, const int64_t num_outputs,
+                       const std::string& device)
+      : BaseTvmClass(num_inputs, num_outputs, device) {}
+
+  /*!
+   * \brief Module forward.
+   *
+   * \param inputs Inputs with type List[Tensor].
+   *
+   * \return outputs with type List[Tensor].
+   */
+  c10::List<at::Tensor> forward(const c10::List<at::Tensor>& inputs) override {
+    CHECK_EQ(inputs.size(), num_inputs_);
+    auto shape_repr = TvmShapeRepr(GetShapes(inputs));
+    std::vector<DLTensor> args(num_inputs_ + num_outputs_);
+    auto iter = tvm_modules_.find(shape_repr);
+    CHECK(iter != tvm_modules_.end());
+    const auto& tvm_pack = iter->second;
+    std::vector<TensorAsBuf> buf_infos;
+    buf_infos.reserve(num_inputs_ + num_outputs_);
+
+    for (int i = 0; i < num_inputs_; ++i) {
+      at::Tensor inp = inputs[i];
+      CHECK(is_on_same_device(inp))
+          << "input #" << i
+          << " of forward is not on the same device with TvmGraphRuntime, expected " << device()
+          << " but got " << inp.device().str();
+      inp = inp.contiguous();
+      buf_infos.emplace_back(inp);
+      auto& input_buf = buf_infos[i];
+      input_buf.CopyFromOrigin();
+      input_buf.MakeDLTensor(&args[i]);
+      tvm_pack.set_input(i, &args[i]);

Review comment:
       Is copying always happening here? From the RFC discussion I was assuming that we can do both input and output zero copy, see https://github.com/apache/tvm-rfcs/pull/25#discussion_r697321721







[GitHub] [tvm] Meteorix commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-905320199


   Update:
   * RFC: https://github.com/apache/tvm-rfcs/pull/25
   * discuss: https://discuss.tvm.apache.org/t/rfc-pytorchtvm-compile-torchscript-to-tvm-and-use-accelerated-module-in-pytorch/10873





[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740017992



##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)
+        self.engine = torch.classes.tvm_dsoop.TvmGraphModule(num_inputs, num_outputs, self.device)
+
+    def init(self, input_shapes, lib_path, graph_path, params_path):
+        r"""Load tvm module"""
+        self.engine.load_tvm_module(input_shapes, lib_path, graph_path, params_path)
+
+    def forward(self, inputs: List[torch.Tensor]):
+        r"""Call tvm module to forward"""
+        return self.engine.forward(inputs)
+
+    @property
+    def device(self):
+        r"""Get the device string"""
+        return str(self.dummy_param.device)
+
+    def _apply(self, func):
+        r"""Override to device function, manually move tvm module to desired device"""
+        super()._apply(func)
+        if self.engine is not None:
+            self.engine.to(self.device)
+        return self
+
+
+class VMModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)
+        self.engine = torch.classes.tvm_dsoop.TvmVMModule(num_inputs, num_outputs, self.device)
+
+    def init(self, input_shapes, lib_path, code_path):
+        r"""Load tvm module"""
+        self.engine.load_tvm_module(input_shapes, lib_path, code_path)
+
+    def forward(self, inputs: List[torch.Tensor]):
+        r"""Call tvm module to forward"""
+        return self.engine.forward(inputs)
+
+    @property
+    def device(self):
+        r"""Get the device string"""
+        return str(self.dummy_param.device)
+
+    def _apply(self, func):
+        r"""Override to device function, manually move tvm module to desired device"""
+        super()._apply(func)
+        if self.engine is not None:
+            self.engine.to(self.device)
+        return self
+
+
+class TraceTvmModule(torch.nn.Module):

Review comment:
       This just converts the input and output of the module from `List[Tensor]` to a tuple of Tensors.
   Added a test case in `test_trace_tvm_module.py`
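
    For readers: `torch.jit.trace` expects Tensor or tuple-of-Tensor inputs and outputs, which is why such an adapter is needed. The idea, as a sketch (not the exact class in this PR):
    ```python
    import torch
    from typing import List, Tuple


    class TupleIOWrapper(torch.nn.Module):
        """Sketch: adapt a List[Tensor] -> List[Tensor] module to tuple I/O
        so that torch.jit.trace can record it."""

        def __init__(self, tvm_module: torch.nn.Module):
            super().__init__()
            self.tvm_module = tvm_module

        def forward(self, *inputs: torch.Tensor) -> Tuple[torch.Tensor, ...]:
            outputs: List[torch.Tensor] = self.tvm_module(list(inputs))
            return tuple(outputs)
    ```
    With tuple I/O, the wrapped op can be traced alongside regular PyTorch modules.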







[GitHub] [tvm] masahi commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-958121208


   @kongroo Looks like you've hit an unfortunate flaky test error; please kick off another CI job.





[GitHub] [tvm] comaniac commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
comaniac commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-900845407


   It would be good for a new feature of this scope to start with an RFC.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924853002


   @Meteorix I recently learned about [TRTorch](https://github.com/NVIDIA/TRTorch). Is it reasonable to say that PyTorchTVM is similar to what TRTorch does with TensorRT + Torch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] jroesch commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
jroesch commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924636130


   @Meteorix any updates on when this will be ready for review? Happy to help shepherd these changes and work with you to get them merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r713799020



##########
File path: apps/pt_class/tests/test_pt_compile.py
##########
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Test script for tf op module"""
+import torch
+import time
+from torchvision.models import resnet50
+from tvm.contrib.pt_op import compile
+
+
+model = resnet50().half().cuda()
+x = torch.rand([1, 3, 244, 244]).half().cuda()

Review comment:
       224?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r713798294



##########
File path: apps/pt_class/prepare_and_test_pt_tvm_class.sh
##########
@@ -0,0 +1,35 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+TVM_ROOT=$(cd $(dirname $0)/../..; pwd)
+echo "TVM_ROOT=${TVM_ROOT}"
+
+export PYTHONPATH=${TVM_ROOT}/python
+
+python3 -c "import tvm; print(tvm.runtime.enabled('gpu'))" | grep -e 1
+if [ "$?" -eq 0 ]; then
+    echo "Build PT_TVMCLASS with gpu support and execute tests"
+    CMAKE_OPTIONS="-DUSE_CUDA=/data00/liuxin.ai/cuda_111 -DPython3_EXECUTABLE=python3 -DTVM_ROOT=${TVM_ROOT}"

Review comment:
       Update /data00/liuxin.ai/cuda_111




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] kongroo commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-960420666


   > @kongroo Looks like you've hit an unfortunate flaky test error, please kick another job.
   
   Finally CI passed...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r735245631



##########
File path: cmake/modules/contrib/PT_TVMDSOOP.cmake
##########
@@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+if(NOT USE_PT_TVMDSOOP STREQUAL "OFF")
+  find_package(Python3 COMPONENTS Interpreter Development)
+  include_directories(${Python3_INCLUDE_DIRS})
+
+  message(STATUS "Python3_INCLUDE_DIRS: ${Python3_INCLUDE_DIRS}")
+
+  execute_process(COMMAND ${Python3_EXECUTABLE} -c "import torch; print(torch.__path__[0].strip())"
+    OUTPUT_VARIABLE PT_PATH
+    RESULT_VARIABLE PT_STATUS)
+  if (NOT ${PT_STATUS} EQUAL 0)
+    message(FATAL_ERROR "Fail to get pytorch path")
+  endif()
+
+  string(REGEX REPLACE "\n" "" PT_PATH "${PT_PATH}")
+
+  set(PT_COMPILE_FLAGS_STR "-I${PT_PATH}/include -D_GLIBCXX_USE_CXX11_ABI=0")
+  set(PT_LINK_FLAGS_STR "-L${PT_PATH}/lib -l:libtorch.so -l:libtorch_python.so")
+
+  if(NOT USE_CUDA STREQUAL "OFF")
+    add_definitions(-DPT_TVMDSOOP_ENABLE_GPU)
+  endif()
+
+
+  string(REGEX REPLACE "\n" " " PT_FLAGS "${PT_COMPILE_FLAGS} ${PT_LINK_FLAGS}")
+  separate_arguments(PT_COMPILE_FLAGS UNIX_COMMAND ${PT_COMPILE_FLAGS_STR})
+  separate_arguments(PT_LINK_FLAGS UNIX_COMMAND ${PT_LINK_FLAGS_STR})
+
+
+  set(LIBRARY_NAME pt_tvmdsoop)
+  file(GLOB_RECURSE PTTVM_SRCS ${CMAKE_CURRENT_SOURCE_DIR}/src/contrib/torch/**/*.cc)
+  add_library(${LIBRARY_NAME} SHARED ${PTTVM_SRCS})
+  # add_library(${STATIC_NAME} STATIC ${PTTVM_SRCS})
+  # set(PTTVM_LINK_FLAGS -ltvm -ltvm_runtime -L${CMAKE_CURRENT_BINARY_DIR})
+  set(PTTVM_LINK_FLAGS -ltvm -L${CMAKE_CURRENT_BINARY_DIR})
+
+  if (NOT BUILD_PT_TVMDSOOP_ONLY STREQUAL "ON")
+    add_dependencies(${LIBRARY_NAME} tvm) 
+  endif()
+  # add_dependencies(${LIBRARY_NAME} tvm)
+
+  target_compile_options(${LIBRARY_NAME} PUBLIC ${PTTVM_COMPILE_FLAGS} ${PT_COMPILE_FLAGS})
+  target_link_libraries(${LIBRARY_NAME} PUBLIC ${PTTVM_LINK_FLAGS} ${PT_LINK_FLAGS})
+  # target_compile_options(${STATIC_NAME} PUBLIC ${PTTVM_COMPILE_FLAGS} ${PT_COMPILE_FLAGS})
+  # target_link_libraries(${STATIC_NAME} PUBLIC ${PTTVM_LINK_FLAGS} ${PT_LINK_FLAGS})

Review comment:
       Please remove commented lines.

##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)
+        self.engine = torch.classes.tvm_dsoop.TvmGraphModule(num_inputs, num_outputs, self.device)
+
+    def init(self, input_shapes, lib_path, graph_path, params_path):
+        r"""Load tvm module"""
+        self.engine.load_tvm_module(input_shapes, lib_path, graph_path, params_path)
+
+    def forward(self, inputs: List[torch.Tensor]):
+        r"""Call tvm module to forward"""
+        return self.engine.forward(inputs)
+
+    @property
+    def device(self):
+        r"""Get the device string"""
+        return str(self.dummy_param.device)
+
+    def _apply(self, func):
+        r"""Override to device function, manually move tvm module to desired device"""
+        super()._apply(func)
+        if self.engine is not None:
+            self.engine.to(self.device)
+        return self
+
+
+class VMModule(torch.nn.Module):

Review comment:
       It seems this class is not used anywhere. Please add a test for it or remove it.

##########
File path: python/tvm/contrib/torch/pytorch_tvm.py
##########
@@ -0,0 +1,226 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""`compile` api that convert torch module to torch tvm module"""
+import os
+import tvm
+import tvm.testing
+from tvm import relay, autotvm
+from tvm.runtime import load_module
+from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
+from tvm.contrib import graph_executor
+from tvm.contrib.debugger import debug_executor
+from . import GraphModule
+
+
+def tune_tasks(
+    tasks,
+    measure_option,
+    tuner="xgb",
+    n_trial=1000,
+    early_stopping=None,
+    log_filename="tuning.log",
+    use_transfer_learning=True,
+):
+    """Tune tasks and generate tuning log to file"""
+    # create tmp log file
+    tmp_log_file = log_filename + ".tmp"
+    if os.path.exists(tmp_log_file):
+        os.remove(tmp_log_file)
+
+    for i, tsk in enumerate(reversed(tasks)):
+        prefix = f"[Task {i + 1:2d}/{len(tasks):2d}] "
+
+        # create tuner
+        if tuner in ("xgb", "sgb-rank"):
+            tuner_obj = XGBTuner(tsk, loss_type="rank")
+        elif tuner == "ga":
+            tuner_obj = GATuner(tsk, pop_size=100)
+        elif tuner == "random":
+            tuner_obj = RandomTuner(tsk)
+        elif tuner == "gridsearch":
+            tuner_obj = GridSearchTuner(tsk)
+        else:
+            raise ValueError("Invalid tuner: " + tuner)
+
+        if use_transfer_learning:
+            if os.path.isfile(tmp_log_file):
+                tuner_obj.load_history(autotvm.record.load_from_file(tmp_log_file))
+
+        # do tuning
+        tsk_trial = min(n_trial, len(tsk.config_space))
+        tuner_obj.tune(
+            n_trial=tsk_trial,
+            early_stopping=early_stopping,
+            measure_option=measure_option,
+            callbacks=[
+                autotvm.callback.progress_bar(tsk_trial, prefix=prefix),
+                autotvm.callback.log_to_file(tmp_log_file),
+            ],
+        )
+
+    # pick best records to a cache file
+    autotvm.record.pick_best(tmp_log_file, log_filename)
+    os.remove(tmp_log_file)
+
+
+def get_tuning_opt(log_file="tuning.log", n_trial=200):
+    """Returns tuning options"""
+    tuning_opt = {
+        "log_filename": log_file,
+        "tuner": "random",
+        "n_trial": n_trial,
+        "early_stopping": 60,
+        "measure_option": autotvm.measure_option(
+            builder=autotvm.LocalBuilder(timeout=10),
+            runner=autotvm.LocalRunner(number=20, repeat=3, timeout=4, min_repeat_ms=150),
+        ),
+    }
+    return tuning_opt
+
+
+TVM_ASSETS = ["mod.so", "graph.json", "params"]
+
+
+class PyTorchTVMModule:
+    """Helper class for compiling pytorch module to tvm module"""
+
+    def __init__(self) -> None:
+        self.script_module = None
+        self.input_infos = None
+        self.default_dtype = "float32"
+        self.mod = None
+        self.params = None
+        self.tasks = None
+        self.target = "cuda"
+        self.dev = tvm.cuda(0)

Review comment:
       target and device are hard-coded. Is this OK? What happens if a user does `model.to("cpu")`?
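   Something along these lines (a hypothetical sketch, not code from this PR) would let the caller choose:

```python
import tvm


class PyTorchTVMModuleSketch:
    # hypothetical: accept target/device instead of hardcoding cuda
    def __init__(self, target="cuda", device=None):
        self.target = target
        # tvm.device maps a device string such as "cuda" or "cpu" to a TVM device
        self.dev = device if device is not None else tvm.device(target, 0)
```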

##########
File path: python/tvm/contrib/torch/__init__.py
##########
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Module container of Pytorch custom class"""
+import os
+import platform
+import torch
+from tvm._ffi import libinfo
+from tvm.relay.frontend import pytorch
+
+
+def _load_platform_specific_library(lib_name="libpt_tvmdsoop"):
+    system = platform.system()
+    if system == "Darwin":
+        lib_file_name = lib_name + ".dylib"
+    elif system == "Windows":
+        lib_file_name = lib_name + ".dll"
+    else:
+        lib_file_name = lib_name + ".so"
+    lib_path = libinfo.find_lib_path()[0]
+    lib_dir = os.path.dirname(lib_path)
+    lib_file_path = os.path.join(lib_dir, lib_file_name)
+    torch.classes.load_library(lib_file_path)
+
+
+_load_platform_specific_library()
+
+from . import module  # nopep8, pylint: disable=wrong-import-position
+
+GraphModule = module.GraphModule
+VMModule = module.VMModule
+TraceTvmModule = module.TraceTvmModule
+
+from . import pytorch_tvm  # nopep8, pylint: disable=wrong-import-position
+
+PyTorchTVMModule = pytorch_tvm.PyTorchTVMModule
+compile = pytorch_tvm.compile  # pylint: disable=redefined-builtin,invalid-name

Review comment:
       Better to put all pylint disable directives below the license (see other files) 

##########
File path: python/tvm/contrib/torch/pytorch_tvm.py
##########
@@ -0,0 +1,226 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""`compile` api that convert torch module to torch tvm module"""
+import os
+import tvm
+import tvm.testing
+from tvm import relay, autotvm
+from tvm.runtime import load_module
+from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
+from tvm.contrib import graph_executor
+from tvm.contrib.debugger import debug_executor
+from . import GraphModule
+
+
+def tune_tasks(
+    tasks,
+    measure_option,
+    tuner="xgb",
+    n_trial=1000,
+    early_stopping=None,
+    log_filename="tuning.log",
+    use_transfer_learning=True,
+):
+    """Tune tasks and generate tuning log to file"""
+    # create tmp log file
+    tmp_log_file = log_filename + ".tmp"
+    if os.path.exists(tmp_log_file):
+        os.remove(tmp_log_file)
+
+    for i, tsk in enumerate(reversed(tasks)):
+        prefix = f"[Task {i + 1:2d}/{len(tasks):2d}] "
+
+        # create tuner
+        if tuner in ("xgb", "sgb-rank"):
+            tuner_obj = XGBTuner(tsk, loss_type="rank")
+        elif tuner == "ga":
+            tuner_obj = GATuner(tsk, pop_size=100)
+        elif tuner == "random":
+            tuner_obj = RandomTuner(tsk)
+        elif tuner == "gridsearch":
+            tuner_obj = GridSearchTuner(tsk)
+        else:
+            raise ValueError("Invalid tuner: " + tuner)
+
+        if use_transfer_learning:
+            if os.path.isfile(tmp_log_file):
+                tuner_obj.load_history(autotvm.record.load_from_file(tmp_log_file))
+
+        # do tuning
+        tsk_trial = min(n_trial, len(tsk.config_space))
+        tuner_obj.tune(
+            n_trial=tsk_trial,
+            early_stopping=early_stopping,
+            measure_option=measure_option,
+            callbacks=[
+                autotvm.callback.progress_bar(tsk_trial, prefix=prefix),
+                autotvm.callback.log_to_file(tmp_log_file),
+            ],
+        )
+
+    # pick best records to a cache file
+    autotvm.record.pick_best(tmp_log_file, log_filename)
+    os.remove(tmp_log_file)
+
+
+def get_tuning_opt(log_file="tuning.log", n_trial=200):
+    """Returns tuning options"""
+    tuning_opt = {
+        "log_filename": log_file,
+        "tuner": "random",
+        "n_trial": n_trial,
+        "early_stopping": 60,
+        "measure_option": autotvm.measure_option(
+            builder=autotvm.LocalBuilder(timeout=10),
+            runner=autotvm.LocalRunner(number=20, repeat=3, timeout=4, min_repeat_ms=150),
+        ),
+    }
+    return tuning_opt
+
+
+TVM_ASSETS = ["mod.so", "graph.json", "params"]
+
+
+class PyTorchTVMModule:
+    """Helper class for compiling pytorch module to tvm module"""
+
+    def __init__(self) -> None:
+        self.script_module = None
+        self.input_infos = None
+        self.default_dtype = "float32"
+        self.mod = None
+        self.params = None
+        self.tasks = None
+        self.target = "cuda"
+        self.dev = tvm.cuda(0)
+        self.log_file = None
+        self.tvm_module = None
+        self.tvm_graph = None
+        self.tvm_lib = None
+        self.tvm_params = None
+
+    def from_pytorch(self, script_module, input_infos, default_dtype="float32"):
+        self.script_module = script_module
+        self.input_infos = input_infos
+        self.default_dtype = default_dtype
+        self.mod, self.params = relay.frontend.from_pytorch(
+            script_module, input_infos, default_dtype=default_dtype
+        )
+
+    def tune_tvm(self, log_file="tuning.log", n_trial=200):
+        self.tasks = autotvm.task.extract_from_program(
+            self.mod["main"],
+            target=self.target,
+            params=self.params,
+        )
+        self.log_file = log_file
+        tuning_opt = get_tuning_opt(log_file, n_trial)
+        tune_tasks(self.tasks, **tuning_opt)
+
+    def build_tvm(self, export_dir, debug_runtime=False):
+        tvm_mod = self._build_tvm(debug_runtime)
+        self._export_tvm(export_dir)
+        return tvm_mod
+
+    def _build_tvm(self, debug_runtime=False):
+        # compile kernels with history best records
+        with autotvm.apply_history_best(self.log_file):
+            with tvm.transform.PassContext(opt_level=3):
+                self.tvm_graph, self.tvm_lib, self.tvm_params = relay.build(
+                    self.mod, target=self.target, params=self.params
+                )
+
+        if not debug_runtime:
+            self.tvm_module = graph_executor.create(self.tvm_graph, self.tvm_lib, device=self.dev)
+        else:
+            self.tvm_module = debug_executor.create(self.tvm_graph, self.tvm_lib, device=self.dev)
+        self.tvm_module.set_input(**self.tvm_params)
+        return self.tvm_module
+
+    def _export_tvm(self, export_dir):
+        if not os.path.isdir(export_dir):
+            os.makedirs(export_dir)
+        self.export_dir = export_dir
+        self.tvm_lib.export_library(os.path.join(export_dir, TVM_ASSETS[0]))
+        with open(os.path.join(export_dir, TVM_ASSETS[1]), "w", encoding="utf8") as fout:
+            fout.write(self.tvm_graph)
+        with open(os.path.join(export_dir, TVM_ASSETS[2]), "wb") as fout:
+            fout.write(relay.save_param_dict(self.tvm_params))
+
+    def load_tvm(self, export_dir):
+        """Load tvm module from export directory"""
+        self.export_dir = export_dir
+        self.tvm_lib = load_module(os.path.join(export_dir, TVM_ASSETS[0]))
+        with open(os.path.join(export_dir, TVM_ASSETS[1]), "r", encoding="utf8") as f:
+            self.tvm_graph = f.read()
+        with open(os.path.join(export_dir, TVM_ASSETS[2]), "rb") as f:
+            self.tvm_params = relay.load_param_dict(f.read())
+
+        self.tvm_module = graph_executor.create(self.tvm_graph, self.tvm_lib, device=self.dev)
+        self.tvm_module.set_input(**self.tvm_params)
+        return self.tvm_module
+
+    def build_pytorch_op(self, num_inputs, num_outputs, input_infos=None):
+        assert self.export_dir, "you must build_tvm or load_tvm before"
+        input_infos = input_infos or self.input_infos
+        assert input_infos
+        assert len(input_infos) == num_inputs
+        assets = [os.path.join(self.export_dir, i) for i in TVM_ASSETS]
+        input_shapes = [i[1] for i in input_infos]
+        mod = GraphModule(num_inputs=num_inputs, num_outputs=num_outputs).to(self.target)
+        mod.init(input_shapes, *assets)
+        return mod
+
+
+def compile(script_module, option):  # pylint: disable=redefined-builtin
+    """
+    option = {
+        "input_infos": [
+            ("x", (1, 3, 244, 244)),
+        ],
+        "default_dtype": "float16",
+        "export_dir": "pytorch_compiled",
+        "num_outputs": 1,
+        "tuning_n_trials": 20,  # set zero to skip tuning
+        "tuning_log_file": "tuning.log",
+    }
+    script_module = torch.jit.script(model)
+    pytorch_tvm_module = compile(script_module, option)
+    pytorch_tvm_module("model_tvm.pt")
+    """
+    mod = PyTorchTVMModule()
+    print("Converting...")
+    input_infos = option["input_infos"]
+    default_dtype = option.get("default_dtype", "float32")
+    export_dir = option.get("export_dir", "pytorch_compiled")
+    tuning_log_file = option.get("tuning_log_file", "tuning.log")
+    tuning_n_trials = option.get("tuning_n_trials", 20)
+    num_outputs = option.get("num_outputs", 1)
+

Review comment:
       I think it is worth adding an option to enable fp16 quantization and NHWC layout conversion on the TVM side, to enable the use of Tensor Cores.
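   Roughly something like this on the Relay module before `relay.build` (a hedged sketch using existing Relay passes; the option plumbing into `compile` is omitted):

```python
import tvm
from tvm import relay


def convert_for_tensor_cores(mod):
    seq = tvm.transform.Sequential(
        [
            # convert conv2d to NHWC so Tensor Core friendly schedules can be picked
            relay.transform.ConvertLayout({"nn.conv2d": ["NHWC", "default"]}),
            # cast eligible ops to fp16 while keeping sensitive ops in fp32
            relay.transform.ToMixedPrecision("float16"),
        ]
    )
    with tvm.transform.PassContext(opt_level=3):
        return seq(mod)
```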

##########
File path: python/tvm/contrib/torch/__init__.py
##########
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Module container of Pytorch custom class"""
+import os
+import platform
+import torch
+from tvm._ffi import libinfo
+from tvm.relay.frontend import pytorch
+
+
+def _load_platform_specific_library(lib_name="libpt_tvmdsoop"):
+    system = platform.system()
+    if system == "Darwin":
+        lib_file_name = lib_name + ".dylib"
+    elif system == "Windows":
+        lib_file_name = lib_name + ".dll"

Review comment:
       From the cmake config file I'm assuming that you only support Linux.

##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)
+        self.engine = torch.classes.tvm_dsoop.TvmGraphModule(num_inputs, num_outputs, self.device)
+
+    def init(self, input_shapes, lib_path, graph_path, params_path):
+        r"""Load tvm module"""
+        self.engine.load_tvm_module(input_shapes, lib_path, graph_path, params_path)
+
+    def forward(self, inputs: List[torch.Tensor]):
+        r"""Call tvm module to forward"""
+        return self.engine.forward(inputs)
+
+    @property
+    def device(self):
+        r"""Get the device string"""
+        return str(self.dummy_param.device)
+
+    def _apply(self, func):
+        r"""Override to device function, manually move tvm module to desired device"""
+        super()._apply(func)
+        if self.engine is not None:
+            self.engine.to(self.device)
+        return self
+
+
+class VMModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)
+        self.engine = torch.classes.tvm_dsoop.TvmVMModule(num_inputs, num_outputs, self.device)
+
+    def init(self, input_shapes, lib_path, code_path):
+        r"""Load tvm module"""
+        self.engine.load_tvm_module(input_shapes, lib_path, code_path)
+
+    def forward(self, inputs: List[torch.Tensor]):
+        r"""Call tvm module to forward"""
+        return self.engine.forward(inputs)
+
+    @property
+    def device(self):
+        r"""Get the device string"""
+        return str(self.dummy_param.device)
+
+    def _apply(self, func):
+        r"""Override to device function, manually move tvm module to desired device"""
+        super()._apply(func)
+        if self.engine is not None:
+            self.engine.to(self.device)
+        return self
+
+
+class TraceTvmModule(torch.nn.Module):

Review comment:
       This is also not used anywhere

##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)

Review comment:
       Should we make sure that the `device` param and the target the `.so` file is compiled for are consistent?
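   A hypothetical guard might look like this (the mapping from device string to target is an assumption for illustration):

```python
def check_device_matches_target(device: str, target: str):
    # refuse to run a module on a device family it was not compiled for
    is_cuda_device = device.startswith("cuda")
    is_cuda_target = target.startswith("cuda")
    if is_cuda_device != is_cuda_target:
        raise ValueError(
            f"module compiled for target '{target}' cannot run on device '{device}'"
        )
```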




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740017079



##########
File path: python/tvm/contrib/torch/pytorch_tvm.py
##########
@@ -0,0 +1,226 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""`compile` api that convert torch module to torch tvm module"""
+import os
+import tvm
+import tvm.testing
+from tvm import relay, autotvm
+from tvm.runtime import load_module
+from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
+from tvm.contrib import graph_executor
+from tvm.contrib.debugger import debug_executor
+from . import GraphModule
+
+
+def tune_tasks(
+    tasks,
+    measure_option,
+    tuner="xgb",
+    n_trial=1000,
+    early_stopping=None,
+    log_filename="tuning.log",
+    use_transfer_learning=True,
+):
+    """Tune tasks and generate tuning log to file"""
+    # create tmp log file
+    tmp_log_file = log_filename + ".tmp"
+    if os.path.exists(tmp_log_file):
+        os.remove(tmp_log_file)
+
+    for i, tsk in enumerate(reversed(tasks)):
+        prefix = f"[Task {i + 1:2d}/{len(tasks):2d}] "
+
+        # create tuner
+        if tuner in ("xgb", "sgb-rank"):
+            tuner_obj = XGBTuner(tsk, loss_type="rank")
+        elif tuner == "ga":
+            tuner_obj = GATuner(tsk, pop_size=100)
+        elif tuner == "random":
+            tuner_obj = RandomTuner(tsk)
+        elif tuner == "gridsearch":
+            tuner_obj = GridSearchTuner(tsk)
+        else:
+            raise ValueError("Invalid tuner: " + tuner)
+
+        if use_transfer_learning:
+            if os.path.isfile(tmp_log_file):
+                tuner_obj.load_history(autotvm.record.load_from_file(tmp_log_file))
+
+        # do tuning
+        tsk_trial = min(n_trial, len(tsk.config_space))
+        tuner_obj.tune(
+            n_trial=tsk_trial,
+            early_stopping=early_stopping,
+            measure_option=measure_option,
+            callbacks=[
+                autotvm.callback.progress_bar(tsk_trial, prefix=prefix),
+                autotvm.callback.log_to_file(tmp_log_file),
+            ],
+        )
+
+    # pick best records to a cache file
+    autotvm.record.pick_best(tmp_log_file, log_filename)
+    os.remove(tmp_log_file)
+
+
+def get_tuning_opt(log_file="tuning.log", n_trial=200):
+    """Returns tuning options"""
+    tuning_opt = {
+        "log_filename": log_file,
+        "tuner": "random",
+        "n_trial": n_trial,
+        "early_stopping": 60,
+        "measure_option": autotvm.measure_option(
+            builder=autotvm.LocalBuilder(timeout=10),
+            runner=autotvm.LocalRunner(number=20, repeat=3, timeout=4, min_repeat_ms=150),
+        ),
+    }
+    return tuning_opt
+
+
+TVM_ASSETS = ["mod.so", "graph.json", "params"]
+
+
+class PyTorchTVMModule:
+    """Helper class for compiling pytorch module to tvm module"""
+
+    def __init__(self) -> None:
+        self.script_module = None
+        self.input_infos = None
+        self.default_dtype = "float32"
+        self.mod = None
+        self.params = None
+        self.tasks = None
+        self.target = "cuda"
+        self.dev = tvm.cuda(0)
+        self.log_file = None
+        self.tvm_module = None
+        self.tvm_graph = None
+        self.tvm_lib = None
+        self.tvm_params = None
+
+    def from_pytorch(self, script_module, input_infos, default_dtype="float32"):
+        self.script_module = script_module
+        self.input_infos = input_infos
+        self.default_dtype = default_dtype
+        self.mod, self.params = relay.frontend.from_pytorch(
+            script_module, input_infos, default_dtype=default_dtype
+        )
+
+    def tune_tvm(self, log_file="tuning.log", n_trial=200):
+        self.tasks = autotvm.task.extract_from_program(
+            self.mod["main"],
+            target=self.target,
+            params=self.params,
+        )
+        self.log_file = log_file
+        tuning_opt = get_tuning_opt(log_file, n_trial)
+        tune_tasks(self.tasks, **tuning_opt)
+
+    def build_tvm(self, export_dir, debug_runtime=False):
+        tvm_mod = self._build_tvm(debug_runtime)
+        self._export_tvm(export_dir)
+        return tvm_mod
+
+    def _build_tvm(self, debug_runtime=False):
+        # compile kernels with history best records
+        with autotvm.apply_history_best(self.log_file):
+            with tvm.transform.PassContext(opt_level=3):
+                self.tvm_graph, self.tvm_lib, self.tvm_params = relay.build(
+                    self.mod, target=self.target, params=self.params
+                )
+
+        if not debug_runtime:
+            self.tvm_module = graph_executor.create(self.tvm_graph, self.tvm_lib, device=self.dev)
+        else:
+            self.tvm_module = debug_executor.create(self.tvm_graph, self.tvm_lib, device=self.dev)
+        self.tvm_module.set_input(**self.tvm_params)
+        return self.tvm_module
+
+    def _export_tvm(self, export_dir):
+        if not os.path.isdir(export_dir):
+            os.makedirs(export_dir)
+        self.export_dir = export_dir
+        self.tvm_lib.export_library(os.path.join(export_dir, TVM_ASSETS[0]))
+        with open(os.path.join(export_dir, TVM_ASSETS[1]), "w", encoding="utf8") as fout:
+            fout.write(self.tvm_graph)
+        with open(os.path.join(export_dir, TVM_ASSETS[2]), "wb") as fout:
+            fout.write(relay.save_param_dict(self.tvm_params))
+
+    def load_tvm(self, export_dir):
+        """Load tvm module from export directory"""
+        self.export_dir = export_dir
+        self.tvm_lib = load_module(os.path.join(export_dir, TVM_ASSETS[0]))
+        with open(os.path.join(export_dir, TVM_ASSETS[1]), "r", encoding="utf8") as f:
+            self.tvm_graph = f.read()
+        with open(os.path.join(export_dir, TVM_ASSETS[2]), "rb") as f:
+            self.tvm_params = relay.load_param_dict(f.read())
+
+        self.tvm_module = graph_executor.create(self.tvm_graph, self.tvm_lib, device=self.dev)
+        self.tvm_module.set_input(**self.tvm_params)
+        return self.tvm_module
+
+    def build_pytorch_op(self, num_inputs, num_outputs, input_infos=None):
+        assert self.export_dir, "you must build_tvm or load_tvm before"
+        input_infos = input_infos or self.input_infos
+        assert input_infos
+        assert len(input_infos) == num_inputs
+        assets = [os.path.join(self.export_dir, i) for i in TVM_ASSETS]
+        input_shapes = [i[1] for i in input_infos]
+        mod = GraphModule(num_inputs=num_inputs, num_outputs=num_outputs).to(self.target)
+        mod.init(input_shapes, *assets)
+        return mod
+
+
+def compile(script_module, option):  # pylint: disable=redefined-builtin
+    """
+    option = {
+        "input_infos": [
+            ("x", (1, 3, 244, 244)),
+        ],
+        "default_dtype": "float16",
+        "export_dir": "pytorch_compiled",
+        "num_outputs": 1,
+        "tuning_n_trials": 20,  # set zero to skip tuning
+        "tuning_log_file": "tuning.log",
+    }
+    script_module = torch.jit.script(model)
+    pytorch_tvm_module = compile(script_module, option)
+    pytorch_tvm_module("model_tvm.pt")
+    """
+    mod = PyTorchTVMModule()
+    print("Converting...")
+    input_infos = option["input_infos"]
+    default_dtype = option.get("default_dtype", "float32")
+    export_dir = option.get("export_dir", "pytorch_compiled")
+    tuning_log_file = option.get("tuning_log_file", "tuning.log")
+    tuning_n_trials = option.get("tuning_n_trials", 20)
+    num_outputs = option.get("num_outputs", 1)
+

Review comment:
       I think we can consider doing these in later PRs and let this PR just support basic functionality.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740011447



##########
File path: cmake/modules/contrib/PT_TVMDSOOP.cmake
##########
@@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+if(NOT USE_PT_TVMDSOOP STREQUAL "OFF")
+  find_package(Python3 COMPONENTS Interpreter Development)
+  include_directories(${Python3_INCLUDE_DIRS})
+
+  message(STATUS "Python3_INCLUDE_DIRS: ${Python3_INCLUDE_DIRS}")
+
+  execute_process(COMMAND ${Python3_EXECUTABLE} -c "import torch; print(torch.__path__[0].strip())"
+    OUTPUT_VARIABLE PT_PATH
+    RESULT_VARIABLE PT_STATUS)
+  if (NOT ${PT_STATUS} EQUAL 0)
+    message(FATAL_ERROR "Fail to get pytorch path")
+  endif()
+
+  string(REGEX REPLACE "\n" "" PT_PATH "${PT_PATH}")
+
+  set(PT_COMPILE_FLAGS_STR "-I${PT_PATH}/include -D_GLIBCXX_USE_CXX11_ABI=0")
+  set(PT_LINK_FLAGS_STR "-L${PT_PATH}/lib -l:libtorch.so -l:libtorch_python.so")
+
+  if(NOT USE_CUDA STREQUAL "OFF")
+    add_definitions(-DPT_TVMDSOOP_ENABLE_GPU)
+  endif()
+
+
+  string(REGEX REPLACE "\n" " " PT_FLAGS "${PT_COMPILE_FLAGS} ${PT_LINK_FLAGS}")
+  separate_arguments(PT_COMPILE_FLAGS UNIX_COMMAND ${PT_COMPILE_FLAGS_STR})
+  separate_arguments(PT_LINK_FLAGS UNIX_COMMAND ${PT_LINK_FLAGS_STR})
+
+
+  set(LIBRARY_NAME pt_tvmdsoop)
+  file(GLOB_RECURSE PTTVM_SRCS ${CMAKE_CURRENT_SOURCE_DIR}/src/contrib/torch/**/*.cc)
+  add_library(${LIBRARY_NAME} SHARED ${PTTVM_SRCS})
+  # add_library(${STATIC_NAME} STATIC ${PTTVM_SRCS})
+  # set(PTTVM_LINK_FLAGS -ltvm -ltvm_runtime -L${CMAKE_CURRENT_BINARY_DIR})
+  set(PTTVM_LINK_FLAGS -ltvm -L${CMAKE_CURRENT_BINARY_DIR})
+
+  if (NOT BUILD_PT_TVMDSOOP_ONLY STREQUAL "ON")
+    add_dependencies(${LIBRARY_NAME} tvm) 
+  endif()
+  # add_dependencies(${LIBRARY_NAME} tvm)
+
+  target_compile_options(${LIBRARY_NAME} PUBLIC ${PTTVM_COMPILE_FLAGS} ${PT_COMPILE_FLAGS})
+  target_link_libraries(${LIBRARY_NAME} PUBLIC ${PTTVM_LINK_FLAGS} ${PT_LINK_FLAGS})
+  # target_compile_options(${STATIC_NAME} PUBLIC ${PTTVM_COMPILE_FLAGS} ${PT_COMPILE_FLAGS})
+  # target_link_libraries(${STATIC_NAME} PUBLIC ${PTTVM_LINK_FLAGS} ${PT_LINK_FLAGS})

Review comment:
       resolved




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-957032634


   @kongroo Please make sure to pass the CI.
   
   @jroesch @junrushao1994 @tqchen @comaniac Please take a look if you want to review. Otherwise I'm going to merge this week; I think we can bring this into the v0.8 release.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-962266735


   Thanks @Meteorix @kongroo, this is merged!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi merged pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #8777:
URL: https://github.com/apache/tvm/pull/8777


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] kongroo commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-927293667


   > Coming late to the discussion, thanks for the great work. It would be nice to discuss the naming a bit. Given we are moving towards a serious first class PT support. Some namespace ideas:
   > 
   > * tvm.contrib.pt_op
   > * tvm.contrib.pytorch
   > * tvm.contrib.torch
   
   I'm in favor of tvm.contrib.torch. pt_op is not accurate because it's actually implemented as a custom class, not a custom op. And torch is shorter than pytorch.
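   To illustrate the distinction (names are from this PR; the argument shapes are illustrative and assume `libpt_tvmdsoop` has already been loaded):

```python
import torch

# custom op: a stateless function registered under torch.ops
shapes = torch.ops.tvm_dsoop.tvm_shape_repr([[1, 3, 224, 224]])

# custom class: a stateful object registered under torch.classes,
# constructed once with its configuration and then called repeatedly
engine = torch.classes.tvm_dsoop.TvmGraphModule(1, 1, "cuda:0")
```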


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] junrushao1994 commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924637719


   @jroesch we just had a long discussion in the RFC and the RFC was merged weeks ago. @Meteorix has been a little busy in recent weeks but will follow up soon


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] junrushao1994 commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
junrushao1994 commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-900852645


   Yeah, it is a super exciting feature for TVM, so it would get more visibility in the community if we had an RFC


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] jroesch commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
jroesch commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-926192956


   > > @Meteorix any updates on when this will be ready for review? happy to help shepherd these changes and work with you to get them merged.
   > 
   > Yes, it's ready for review!
   
   Great, I will take a pass this weekend, and maybe we can aim for version one sometime next week?


[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740017281



##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)
+        self.engine = torch.classes.tvm_dsoop.TvmGraphModule(num_inputs, num_outputs, self.device)
+
+    def init(self, input_shapes, lib_path, graph_path, params_path):
+        r"""Load tvm module"""
+        self.engine.load_tvm_module(input_shapes, lib_path, graph_path, params_path)
+
+    def forward(self, inputs: List[torch.Tensor]):
+        r"""Call tvm module to forward"""
+        return self.engine.forward(inputs)
+
+    @property
+    def device(self):
+        r"""Get the device string"""
+        return str(self.dummy_param.device)
+
+    def _apply(self, func):
+        r"""Override to device function, manually move tvm module to desired device"""
+        super()._apply(func)
+        if self.engine is not None:
+            self.engine.to(self.device)
+        return self
+
+
+class VMModule(torch.nn.Module):

Review comment:
       added test case in `test_torch_vm_module.py`
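
       For context, a minimal usage sketch of the quoted `GraphModule` (the export file paths here are hypothetical, and the exact argument format expected by `shape_repr` is an assumption; it is not shown in this thread):

       ```python
       import torch
       from tvm.contrib.torch import GraphModule

       # Hypothetical artifacts from a prior PyTorchTVM export step.
       lib_path = "export/mod.so"
       graph_path = "export/graph.json"
       params_path = "export/graph.params"

       x = torch.rand(1, 3, 224, 224)

       mod = GraphModule(num_inputs=1, num_outputs=1, device="cpu")
       # Assumption: shape_repr takes the list of input shapes.
       shape_repr = GraphModule.shape_repr([list(x.shape)])
       mod.init(shape_repr, lib_path, graph_path, params_path)

       outputs = mod.forward([x])  # a list of output tensors
       ```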




[GitHub] [tvm] kongroo commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-938528754


   I've fixed some namespace and style issues.  Could you please help review this PR? @junrushao1994 @jcf94 @msakai @jroesch 
   
   And I have some questions to discuss:
   1. The `forward` function is not thread-safe. Should we use a mutex to make it thread-safe? (A sketch of a possible workaround for this point and the next follows the list.)
   2. We load the tvm module from files (mod.so, graph.json, params). But passing a relative path to the .so file can give unexpected results, because dlopen typically looks up already-loaded libraries by the path string it is given. Consider this case: we have `export_dir1/mod.so` and `export_dir2/mod.so`; chdir into `export_dir1` and load `./mod.so`, then chdir into `export_dir2` and try to load `./mod.so`, and `export_dir2/mod.so` will not be loaded! One possible solution is to translate the filepath to an absolute path before `dlopen` in `src/runtime/dso_library.cc`. What's your opinion?
   3. We store tvm graph modules in a map `tvm_modules_` and use the input tensors' shapes as the key. But this requires all input tensors to have fixed shapes. To support dynamic shapes, we would need to iterate over all the keys of `tvm_modules_` to find a matching one. Is it necessary to support dynamic shapes? If so, how can we do it efficiently?
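
   For reference, a minimal Python-side sketch of the first two points (the wrapper class and its names are hypothetical; the engine API follows the `GraphModule` code quoted earlier in this thread):

   ```python
   import os
   import threading
   from typing import List

   import torch


   class LockedGraphModule:
       """Hypothetical wrapper: serializes forward() calls with a lock and
       normalizes library paths before they reach dlopen."""

       def __init__(self, engine):
           # e.g. engine = torch.classes.tvm_dsoop.TvmGraphModule(...)
           self.engine = engine
           self._lock = threading.Lock()  # point 1: guard concurrent forward()

       def load(self, input_shapes, lib_path, graph_path, params_path):
           # Point 2: "./mod.so" is resolved against the current working
           # directory at dlopen time, so pin the path down before loading.
           self.engine.load_tvm_module(
               input_shapes,
               os.path.abspath(lib_path),
               os.path.abspath(graph_path),
               os.path.abspath(params_path),
           )

       def forward(self, inputs: List[torch.Tensor]):
           with self._lock:  # point 1: one forward() at a time
               return self.engine.forward(inputs)
   ```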
   


[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r713799184



##########
File path: apps/pt_class/tests/test_pt_compile.py
##########
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Test script for tf op module"""
+import torch
+import time
+from torchvision.models import resnet50
+from tvm.contrib.pt_op import compile
+
+
+model = resnet50().half().cuda()
+x = torch.rand([1, 3, 244, 244]).half().cuda()
+model_jit = torch.jit.trace(model, x)
+print(model_jit.graph)
+
+print("run torchscript...")
+for i in range(20):
+    t = time.time()
+    model_jit(x)
+    torch.cuda.synchronize()
+    print(time.time() - t)
+
+
+option = {
+    "input_infos": [
+        ("x", (1, 3, 244, 244)),

Review comment:
       224?




[GitHub] [tvm] masahi commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r713852017



##########
File path: apps/pt_class/tests/test_pt_graph_module.py
##########
@@ -0,0 +1,123 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Test script for tf op module"""

Review comment:
       pt




[GitHub] [tvm] jcf94 commented on pull request #8777: [PyTorch][WIP]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
jcf94 commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-905338556


   Looks great!
   We once implemented a similar approach that put an optimized TVM graph runtime back into TensorFlow and PyTorch through custom ops. For some reason we're not able to split that code out of another project. Glad to have this feature in the main branch!
   
   cc @minminsun 


[GitHub] [tvm] Meteorix commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
Meteorix commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-924889926


   > @Meteorix I recently learned about [TRTorch](https://github.com/NVIDIA/TRTorch). Is it reasonable to say that PyTorchTVM is similar to what TRTorch does with TensorRT + Torch?
   
   Yes, actually the proposed frontend API is inspired by TRTorch


[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r740014236



##########
File path: python/tvm/contrib/torch/module.py
##########
@@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Module container of PyTorch custom class"""
+from typing import List
+import torch
+
+
+class GraphModule(torch.nn.Module):
+    r"""Module container of Pytorch class which wraps exported
+    TVM op implementation library to be called on Pytorch side"""
+
+    @classmethod
+    def shape_repr(cls, input_shapes):
+        return torch.ops.tvm_dsoop.tvm_shape_repr(input_shapes)
+
+    def __init__(self, num_inputs, num_outputs, device=None):
+        super().__init__()
+        self.dummy_param = torch.nn.Parameter(torch.empty(0))
+        self.engine = None
+
+        if device is not None:
+            self.to(device)

Review comment:
       I don't know how to find the device type that the .so file is compiled for, and I'm not sure that is possible. Maybe it is OK to just let the user do the check, as sketched below?
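
       A possible user-side check, as a sketch (the helper below is hypothetical, not part of this PR):

       ```python
       import torch

       def move_to_device(mod, device: str):
           # The exported .so does not record its compile target, so the
           # user validates the requested device before moving the module.
           if device.startswith("cuda") and not torch.cuda.is_available():
               raise RuntimeError("module requested on CUDA, but no GPU is available")
           return mod.to(device)
       ```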




[GitHub] [tvm] masahi commented on pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#issuecomment-957032634


   @kongroo Please make sure to pass the CI.
   
   @jroesch @junrushao1994 @tqchen @comaniac Please take a look if you want to review. Otherwise I'm going to merge this week; I think we can bring this to the v0.8 release.


[GitHub] [tvm] kongroo commented on a change in pull request #8777: [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op

Posted by GitBox <gi...@apache.org>.
kongroo commented on a change in pull request #8777:
URL: https://github.com/apache/tvm/pull/8777#discussion_r735301129



##########
File path: cmake/modules/contrib/PT_TVMDSOOP.cmake
##########
@@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+if(NOT USE_PT_TVMDSOOP STREQUAL "OFF")
+  find_package(Python3 COMPONENTS Interpreter Development)
+  include_directories(${Python3_INCLUDE_DIRS})
+
+  message(STATUS "Python3_INCLUDE_DIRS: ${Python3_INCLUDE_DIRS}")
+
+  execute_process(COMMAND ${Python3_EXECUTABLE} -c "import torch; print(torch.__path__[0].strip())"
+    OUTPUT_VARIABLE PT_PATH
+    RESULT_VARIABLE PT_STATUS)
+  if (NOT ${PT_STATUS} EQUAL 0)
+    message(FATAL_ERROR "Fail to get pytorch path")
+  endif()
+
+  string(REGEX REPLACE "\n" "" PT_PATH "${PT_PATH}")
+
+  set(PT_COMPILE_FLAGS_STR "-I${PT_PATH}/include -D_GLIBCXX_USE_CXX11_ABI=0")
+  set(PT_LINK_FLAGS_STR "-L${PT_PATH}/lib -l:libtorch.so -l:libtorch_python.so")
+
+  if(NOT USE_CUDA STREQUAL "OFF")
+    add_definitions(-DPT_TVMDSOOP_ENABLE_GPU)
+  endif()
+
+
+  string(REGEX REPLACE "\n" " " PT_FLAGS "${PT_COMPILE_FLAGS} ${PT_LINK_FLAGS}")
+  separate_arguments(PT_COMPILE_FLAGS UNIX_COMMAND ${PT_COMPILE_FLAGS_STR})
+  separate_arguments(PT_LINK_FLAGS UNIX_COMMAND ${PT_LINK_FLAGS_STR})
+
+
+  set(LIBRARY_NAME pt_tvmdsoop)
+  file(GLOB_RECURSE PTTVM_SRCS ${CMAKE_CURRENT_SOURCE_DIR}/src/contrib/torch/**/*.cc)
+  add_library(${LIBRARY_NAME} SHARED ${PTTVM_SRCS})
+  # add_library(${STATIC_NAME} STATIC ${PTTVM_SRCS})
+  # set(PTTVM_LINK_FLAGS -ltvm -ltvm_runtime -L${CMAKE_CURRENT_BINARY_DIR})
+  set(PTTVM_LINK_FLAGS -ltvm -L${CMAKE_CURRENT_BINARY_DIR})
+
+  if (NOT BUILD_PT_TVMDSOOP_ONLY STREQUAL "ON")
+    add_dependencies(${LIBRARY_NAME} tvm) 
+  endif()
+  # add_dependencies(${LIBRARY_NAME} tvm)
+
+  target_compile_options(${LIBRARY_NAME} PUBLIC ${PTTVM_COMPILE_FLAGS} ${PT_COMPILE_FLAGS})
+  target_link_libraries(${LIBRARY_NAME} PUBLIC ${PTTVM_LINK_FLAGS} ${PT_LINK_FLAGS})
+  # target_compile_options(${STATIC_NAME} PUBLIC ${PTTVM_COMPILE_FLAGS} ${PT_COMPILE_FLAGS})
+  # target_link_libraries(${STATIC_NAME} PUBLIC ${PTTVM_LINK_FLAGS} ${PT_LINK_FLAGS})

Review comment:
       Thanks a lot for reviewing. I'll update the code soon.



