Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/02/18 04:54:06 UTC

[GitHub] [tvm] jroesch commented on a change in pull request #7085: [WIP] Fixes for using Python APIs from Rust.

jroesch commented on a change in pull request #7085:
URL: https://github.com/apache/tvm/pull/7085#discussion_r578132071



##########
File path: rust/tvm/README.md
##########
@@ -15,221 +15,40 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-# TVM Runtime Frontend Support
+# TVM
 
-This crate provides an idiomatic Rust API for [TVM](https://github.com/apache/tvm) runtime frontend. Currently this requires **Nightly Rust** and tested on `rustc 1.32.0-nightly`
+This crate provides an idiomatic Rust API for [TVM](https://github.com/apache/tvm).
+The code works on **Stable Rust** and is tested against `rustc 1.47`.
 
-## What Does This Crate Offer?
-
-Here is a major workflow
-
-1. Train your **Deep Learning** model using any major framework such as [PyTorch](https://pytorch.org/), [Apache MXNet](https://mxnet.apache.org/) or [TensorFlow](https://www.tensorflow.org/)
-2. Use **TVM** to build optimized model artifacts on a supported context such as CPU, GPU, OpenCL and specialized accelerators.
-3. Deploy your models using **Rust** :heart:
-
-### Example: Deploy Image Classification from Pretrained Resnet18 on ImageNet1k
-
-Please checkout [examples/resnet](examples/resnet) for the complete end-to-end example.
-
-Here's a Python snippet for downloading and building a pretrained Resnet18 via Apache MXNet and TVM
-
-```python
-block = get_model('resnet18_v1', pretrained=True)
-
-sym, params = relay.frontend.from_mxnet(block, shape_dict)
-# compile the model
-with relay.build_config(opt_level=opt_level):
-    graph, lib, params = relay.build(
-        net, target, params=params)
-# same the model artifacts
-lib.save(os.path.join(target_dir, "deploy_lib.o"))
-cc.create_shared(os.path.join(target_dir, "deploy_lib.so"),
-                [os.path.join(target_dir, "deploy_lib.o")])
-
-with open(os.path.join(target_dir, "deploy_graph.json"), "w") as fo:
-    fo.write(graph.json())
-with open(os.path.join(target_dir,"deploy_param.params"), "wb") as fo:
-    fo.write(relay.save_param_dict(params))
-```
+You can find the API Documentation [here](https://tvm.apache.org/docs/api/rust/tvm/index.html).
 
-Now, we need to input the artifacts to create and run the *Graph Runtime* to detect our input cat image
-
-![cat](https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true)
+## What Does This Crate Offer?
 
-as demostrated in the following Rust snippet
+The goal of this crate is to provide bindings to both the TVM compiler and runtime
+APIs. First train your **Deep Learning** model using any major framework such as
+[PyTorch](https://pytorch.org/), [Apache MXNet](https://mxnet.apache.org/) or [TensorFlow](https://www.tensorflow.org/).
+Then use **TVM** to build and deploy optimized model artifacts on supported devices such as CPUs, GPUs, OpenCL and specialized accelerators.
 
-```rust
-    let graph = fs::read_to_string("deploy_graph.json")?;
-    // load the built module
-    let lib = Module::load(&Path::new("deploy_lib.so"))?;
-    // get the global TVM graph runtime function
-    let runtime_create_fn = Function::get("tvm.graph_runtime.create", true).unwrap();
-    let runtime_create_fn_ret = call_packed!(
-        runtime_create_fn,
-        &graph,
-        &lib,
-        &ctx.device_type,
-        &ctx.device_id
-    )?;
-    // get graph runtime module
-    let graph_runtime_module: Module = runtime_create_fn_ret.try_into()?;
-    // get the registered `load_params` from runtime module
-    let ref load_param_fn = graph_runtime_module
-        .get_function("load_params", false)
-        .unwrap();
-    // parse parameters and convert to TVMByteArray
-    let params: Vec<u8> = fs::read("deploy_param.params")?;
-    let barr = TVMByteArray::from(&params);
-    // load the parameters
-    call_packed!(load_param_fn, &barr)?;
-    // get the set_input function
-    let ref set_input_fn = graph_runtime_module
-        .get_function("set_input", false)
-        .unwrap();
+The Rust bindings are composed of a few crates:
+- The [tvm](https://tvm.apache.org/docs/api/rust/tvm/index.html) crate which exposes Rust bindings to
+  both the compiler and runtime.
+- The [tvm_macros](https://tvm.apache.org/docs/api/rust/tvm_macros/index.html) crate which provides macros
+  which generate unsafe boilerplate for TVM's data structures.
+- The [tvm_rt](https://tvm.apache.org/docs/api/rust/tvm_rt/index.html) crate which exposes Rust

Review comment:
       TVM generates two dynamic libraries when we build it: the runtime bindings (`tvm_rt`) can be used to link against just the TVM runtime library, while the full `tvm` crate links against both the runtime and the compiler.
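
A minimal sketch of what this choice looks like in a downstream `Cargo.toml`, assuming the crates are published under the hyphenated names `tvm` and `tvm-rt` (the exact package names and versions here are illustrative, not taken from the PR):

```toml
[dependencies]
# Deployment-only use case: link just the TVM runtime library.
tvm-rt = "0.1"

# Alternatively, for access to both the compiler and runtime APIs,
# depend on the full crate instead (links both dynamic libraries):
# tvm = "0.1"
```

Depending on `tvm-rt` alone keeps deployed binaries free of the compiler's larger dependency surface, which matches the split between the two dynamic libraries described above.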




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org