Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/06 04:29:08 UTC

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4564: [Doc] Introduction to module serialization

FrozenGene commented on a change in pull request #4564: [Doc] Introduction to module serialization
URL: https://github.com/apache/incubator-tvm/pull/4564#discussion_r363155024
 
 

 ##########
 File path: docs/dev/introduction_to_module_serialization.rst
 ##########
 @@ -0,0 +1,211 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+..    http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+Introduction to Module Serialization
+====================================
+
+When deploying a TVM runtime module, no matter whether it targets CPU or GPU, TVM only needs one single DLL.
+The key is our unified module serialization mechanism. This document introduces the TVM module
+serialization format standard and its implementation details.
+
+***************************************
+Module Export Example
+***************************************
+
+Let us first build a ResNet-18 workload for GPU as our example.
+
+.. code:: python
+
+   from tvm import relay
+   from tvm.relay import testing
+   from tvm.contrib import util
+   import tvm
+
+   # Resnet18 workload
+   resnet18_mod, resnet18_params = relay.testing.resnet.get_workload(num_layers=18)
+
+   # build
+   with relay.build_config(opt_level=3):
+       _, resnet18_lib, _ = relay.build_module.build(resnet18_mod, "cuda", params=resnet18_params)
+
+   # create a temporary directory
+   temp = util.tempdir()
+
+   # path to the exported library
+   file_name = "deploy.so"
+   path_lib = temp.relpath(file_name)
+
+   # export library
+   resnet18_lib.export_library(path_lib)
+
+   # load it back
+   loaded_lib = tvm.module.load(path_lib)
+   assert loaded_lib.type_key == "library"
+   assert loaded_lib.imported_modules[0].type_key == "cuda"
+
+
+**************
+Serialization
+**************
+
+The entry API is ``export_library`` of ``tvm.module.Module``.
+Inside this function, we perform the following steps:
+
+1. Collect all DSO modules (LLVM modules or C modules).
+
+
+2. If we have DSO modules, we call the ``save`` function to save them into files.
+
+
+3. Next, we check whether we have imported modules, such as CUDA,
+   OpenCL or anything else; we do not restrict the module type here.
+   If we have imported modules, we create one file named ``dev.cc``
+   (so that we could compile it into one DLL), then call one function
 
 Review comment:
  A better description might be: (so that we could embed the binary blob data of the imported modules into one DLL). For example, for a CUDA module we have a function named `SaveToBinary`, which we use to serialize the related data of the CUDA module (such as the function information table, the binary data, and so on) into a binary blob. Then we translate the binary blob into a C program, `const char __tvm_dev_blob[] = {0x...0x...0x...}`, and finally we write this into dev.cc, which the compiler builds into the shared library (using -fPIC -shared).
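  To make that flow concrete, here is a minimal Python sketch of the embedding idea. TVM's actual implementation is in C++; the helper name `blob_to_dev_cc` and the placeholder blob below are illustrative assumptions, while `__tvm_dev_blob`, `SaveToBinary`, dev.cc, and the `-fPIC -shared` flags come from the description above.

  ```python
  # Illustrative sketch only -- not TVM's real (C++) implementation.
  # It mimics the flow described above: take the binary blob produced by an
  # imported module's SaveToBinary, render it as a C byte-array initializer
  # named __tvm_dev_blob, and emit a dev.cc that a compiler can build into
  # the shared library with -fPIC -shared.

  def blob_to_dev_cc(blob: bytes) -> str:
      # Render every byte of the blob as a hex literal, e.g. 0x7f, 0x45, ...
      hex_bytes = ", ".join("0x%02x" % b for b in blob)
      # Embed the bytes in a C global so the DLL carries the serialized
      # data of the imported modules (function table, binary data, etc.).
      return 'extern "C" const char __tvm_dev_blob[] = {%s};\n' % hex_bytes

  if __name__ == "__main__":
      # Stand-in for the blob a real SaveToBinary call would produce.
      fake_blob = b"example-cuda-blob"
      print(blob_to_dev_cc(fake_blob))
  ```

  Embedding the blob as a C array means a stock compiler and linker can carry arbitrary device code inside an ordinary shared library, which is why one DLL suffices for both host and device modules.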
