Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/21 18:10:58 UTC

[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4718: [Docs] Bring Your Own Codegen Guide -- Part 2

URL: https://github.com/apache/incubator-tvm/pull/4718#discussion_r369160146
 
 

 ##########
 File path: docs/dev/relay_bring_your_own_codegen.rst
 ##########
 @@ -516,10 +516,417 @@ So that users can configure whether to include your compiler when configuring TV
 Implement a Codegen for Your Representation
 *******************************************
 
-Although we have demonstrated how to implement a C codegen, your hardware may require other forms of graph representation, such as JSON. In this case, you can slightly modify ``CodegenC`` class we have implemented to generate your own graph representation, and implement a customized runtime module to let TVM runtime know how this graph representation should be executed. **(TBA)**
+Although we have demonstrated how to implement a C codegen, your hardware may require other forms of graph representation, such as JSON. In this case, you can modify the ``CodegenC`` class we have implemented to generate your own graph representation, and implement a customized runtime module to let the TVM runtime know how this graph representation should be executed.
 
-Implement CodegenJSON
-=====================
+For simplicity, we define a graph representation named "ExampleJSON" in this guide. ExampleJSON is not real JSON but merely a simple representation of graphs without control flow. For example, assume we have the following subgraph named ``subgraph_0``:
+
+::
+
+         input0
+           |
+          add <-- input1
+           |
+        subtract <-- input2
+           |
+        multiply <-- input3
+           |
+          out
+
+Then the ExampleJSON representation of this subgraph looks like:
+
+.. code-block:: text
+
+  subgraph_0
+    input 0 10 10
+    input 1 10 10
+    input 2 10 10
+    input 3 10 10
+    add 4 inputs: 0 1 shape: 10 10
+    sub 5 inputs: 4 2 shape: 10 10
+    mul 6 inputs: 5 3 shape: 10 10
+
+The ``input`` keyword declares an input tensor with its ID and shape, while the other statements describe computations in ``<op> <output ID> inputs: [input ID] shape: [shape]`` syntax.
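To make the statement syntax concrete, here is a standalone C++ sketch (independent of TVM; the ``Statement`` struct and ``ParseStatement`` helper are illustrative names of our own, not part of the guide's code) that tokenizes one such statement with ``std::istringstream``:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Illustrative only: tokenize one ExampleJSON statement such as
//   "add 4 inputs: 0 1 shape: 10 10"
// into its op name, output ID, input IDs, and shape.
struct Statement {
  std::string op;
  int output_id;
  std::vector<int> input_ids;
  std::vector<int> shape;
};

Statement ParseStatement(const std::string& line) {
  Statement stmt;
  std::istringstream ss(line);
  std::string token;
  ss >> stmt.op >> stmt.output_id;
  ss >> token;  // consume the "inputs:" keyword
  bool parsing_inputs = true;
  while (ss >> token) {
    if (token == "shape:") {
      parsing_inputs = false;
      continue;
    }
    int value = std::stoi(token);
    (parsing_inputs ? stmt.input_ids : stmt.shape).push_back(value);
  }
  return stmt;
}
```

For example, ``ParseStatement("add 4 inputs: 0 1 shape: 10 10")`` yields op ``add``, output ID ``4``, input IDs ``{0, 1}``, and shape ``{10, 10}``.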
+
+In this section, our goal is to implement the following customized TVM runtime module to execute ExampleJSON graphs.
+
+.. code-block:: c++
+
+  runtime::Module ExampleJsonCompiler(const NodeRef& ref) {
+      ExampleJsonCodeGen codegen;
+      std::string code = codegen.gen(ref); // Note 1
+      const auto* pf = runtime::Registry::Get("module.examplejson_module_create"); // Note 2
+      CHECK(pf != nullptr) << "Cannot find ExampleJson module to create the external runtime module";
+      return (*pf)(code);
+  }
+  TVM_REGISTER_GLOBAL("relay.ext.examplejsoncompiler").set_body_typed(ExampleJsonCompiler);
+
+**Note 1**: We will implement a customized codegen later to generate an ExampleJSON code string from a subgraph.
+
+**Note 2**: This line obtains a pointer to a function for creating the customized runtime module. You can see that it takes the subgraph code in the ExampleJSON format we just generated and initializes a runtime module.
+
+In the following sections, we are going to introduce 1) how to implement ``ExampleJsonCodeGen`` and 2) how to implement and register ``examplejson_module_create``.
+
+Implement ExampleJsonCodeGen
+============================
+
+Similar to the C codegen, we derive ``ExampleJsonCodeGen`` from ``ExprVisitor`` to make use of the visitor pattern for subgraph traversal. On the other hand, we do not have to inherit ``CodegenCBase`` because we do not need the TVM C++ wrappers. The codegen class is implemented as follows:
+
+.. code-block:: c++
+
+    #include <tvm/relay/expr_functor.h>
+    #include <tvm/relay/transform.h>
+    #include <tvm/relay/type.h>
+    #include <tvm/runtime/module.h>
+    #include <tvm/runtime/object.h>
+
+    #include <fstream>
+    #include <sstream>
+
+    namespace tvm {
+    namespace relay {
+    namespace contrib {
+
+    class ExampleJsonCodeGen : public ExprVisitor {
+      public:
+        ExampleJsonCodeGen() = default;
+
+        // Note 1
+        void VisitExpr_(const VarNode* node) { /* Skip in this example. */ }
+        void VisitExpr_(const CallNode* call) final { /* Skip in this example. */ }
+
+        // Note 2
+        std::string gen(NodeRef& ref) {
+            this->code = "";
+            if (ref->IsInstance<FunctionNode>()) {
+                this->VisitExpr(Downcast<Function>(ref));
+            } else if (ref->IsInstance<relay::ModuleNode>()) {
+                relay::Module mod = Downcast<relay::Module>(ref);
+                for (const auto& it : mod->functions) {
+                    this->VisitExpr(Downcast<Function>(it.second));
+                }
+            } else {
+                LOG(FATAL) << "The input ref is expected to be a Relay function or module";
+            }
+            return this->code;
+        }
+
+      private:
+        /*! \brief The ExampleJSON code to be generated. */
+        std::string code;
+    };
+
+**Note 1**: We again implement corresponding visitor functions to generate ExampleJSON code and store it in the class variable ``code`` (we skip the visitor function implementations in this example because their concepts are basically the same as the C codegen). After visiting the graph, we should have a complete ExampleJSON graph in ``code``.
+
+**Note 2**: We define an internal API ``gen`` to take a subgraph and generate ExampleJSON code for it. You can give this API any name you prefer.
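Although the visitor bodies are skipped above, the core of such a visitor is to assign each node an ID and append one statement line to ``code``. As a sketch of just that emission step, independent of TVM (the ``EmitStatement`` helper name is ours, invented for illustration):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Illustrative only: the line-emission step a CallNode visitor might
// perform once it knows the op name, the IDs of its arguments, and the
// output shape. Produces e.g. "add 4 inputs: 0 1 shape: 10 10\n",
// which the visitor would append to the `code` member.
std::string EmitStatement(const std::string& op, int output_id,
                          const std::vector<int>& input_ids,
                          const std::vector<int>& shape) {
  std::ostringstream ss;
  ss << op << " " << output_id << " inputs:";
  for (int id : input_ids) ss << " " << id;
  ss << " shape:";
  for (int dim : shape) ss << " " << dim;
  ss << "\n";
  return ss.str();
}
```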
+
+The next step is to implement a customized runtime to make use of the output of ``ExampleJsonCodeGen``.
+
+Implement a Customized Runtime
+==============================
+
+In this section, we will implement a customized TVM runtime step-by-step and register it to TVM runtime modules. The customized runtime should be located at ``src/runtime/contrib/<your-runtime-name>/``. In our example, we name our runtime "example_ext_runtime" and put it under `src/runtime/contrib/example_ext_runtime/example_ext_runtime.cc <src/runtime/contrib/example_ext_runtime/example_ext_runtime.cc>`_. Feel free to check that file for the complete implementation.
+
+Again, we first define a customized runtime class as follows. The class has to be derived from TVM ``ModuleNode`` in order to be compatible with other TVM runtime modules.
+
+.. code-block:: c++
+
+  #include <dmlc/logging.h>
+  #include <tvm/runtime/c_runtime_api.h>
+  #include <tvm/runtime/memory.h>
+  #include <tvm/runtime/module.h>
+  #include <tvm/runtime/ndarray.h>
+  #include <tvm/runtime/object.h>
+  #include <tvm/runtime/packed_func.h>
+  #include <tvm/runtime/registry.h>
+
+  #include <cmath>
+  #include <fstream>
+  #include <map>
+  #include <sstream>
+  #include <string>
+  #include <vector>
+
+  namespace tvm {
+  namespace runtime {
+
+  class ExampleJsonModule : public ModuleNode {
+   public:
+    explicit ExampleJsonModule(std::string graph_json);
+
+    PackedFunc GetFunction(const std::string& name,
+                           const ObjectPtr<Object>& sptr_to_self) final;
+
+    const char* type_key() const { return "examplejson"; }
+
+    void SaveToBinary(dmlc::Stream* stream) final;
+
+    static Module LoadFromBinary(void* strm);
+
+    static Module Create(const std::string& path);
+
+    std::string GetSource(const std::string& format = "");
+
+    void Run(int id, const std::vector<int>& inputs, int output);
+
+    void ParseJson(const std::string& json);
+
+   private:
+    /*! \brief The JSON string that represents a computational graph. */
+    std::string graph_json_;
+    /*! \brief The subgraph that is being processed. */
+    std::string curr_subgraph_;
+    /*! \brief A simple graph from subgraph id to node entries. */
+    std::map<std::string, std::vector<NodeEntry> > graph_;
+    /*! \brief A simple pool to contain the tensor for each node in the graph. */
+    std::vector<NDArray> data_entry_;
+    /*! \brief A mapping from node id to op name. */
+    std::vector<std::string> op_id_;
+  };
+
+In particular, there are some functions derived from ``ModuleNode`` that we must implement in ``ExampleJsonModule``:
+
+* Constructor: The constructor of this class should accept a subgraph (in your representation), process and store it in any format you like. The saved subgraph could be used by the following two functions.
+
+* ``GetFunction``: This is the most important function in this class. When TVM runtime wants to execute a subgraph with your compiler tag, TVM runtime invokes this function from your customized runtime module. It provides the function name as well as runtime arguments, and ``GetFunction`` should return a packed function implementation for TVM runtime to execute.
+
+* ``SaveToBinary`` and ``LoadFromBinary``: ``SaveToBinary`` serializes the runtime module into a binary format for later deployment. TVM calls this function when users invoke the ``export_library`` API. On the other hand, since we are now using our own graph representation, we have to make sure that ``LoadFromBinary`` is able to construct the same runtime module from the serialized binary generated by ``SaveToBinary``.
+
+* ``GetSource`` (optional): If you would like to see the generated ExampleJSON code, you can implement this function to dump it; otherwise you can skip the implementation.
+
+Other functions and class variables will be introduced along with the implementation of the must-have functions above.
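The essence of the ``SaveToBinary``/``LoadFromBinary`` pair is that whatever is written out must be enough to reconstruct the module, and our module is fully determined by ``graph_json_``. The round trip can be sketched as follows, using plain ``std::iostream`` in place of ``dmlc::Stream`` (the ``SaveGraph``/``LoadGraph`` helper names are ours, for illustration only):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative only: serialize the graph string with a length prefix so
// the loader knows how many bytes to read back, mirroring what the real
// SaveToBinary/LoadFromBinary would do with dmlc::Stream.
void SaveGraph(std::ostream& stream, const std::string& graph_json) {
  size_t size = graph_json.size();
  stream.write(reinterpret_cast<const char*>(&size), sizeof(size));
  stream.write(graph_json.data(), size);
}

std::string LoadGraph(std::istream& stream) {
  size_t size = 0;
  stream.read(reinterpret_cast<char*>(&size), sizeof(size));
  std::string graph_json(size, '\0');
  stream.read(&graph_json[0], size);
  return graph_json;
}
```

The loader then simply passes the recovered string to the module constructor, which re-parses the graph exactly as it did at compile time.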
+
+Implement Constructor
+---------------------
+
+.. code-block:: c++
+
+    explicit ExampleJsonModule(std::string graph_json) {
+      this->graph_json_ = graph_json;
+      ParseJson(this->graph_json_);
+    }
+
+Then, we implement ``ParseJson`` to parse a subgraph in ExampleJSON format and construct a graph in memory for later use. Since we do not support subgraphs with branches in this example, we simply use an array to store every node of a subgraph in order.
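As a TVM-free sketch of this parsing step, the ExampleJSON text can be split into subgraph names and their ordered statement lines (the real ``ParseJson`` additionally builds ``NodeEntry`` objects and records op names; the ``ParseExampleJson`` helper below is an illustrative name of our own):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Illustrative only: group the lines of an ExampleJSON string by
// subgraph. A line without leading indentation starts a new subgraph;
// indented lines are that subgraph's statements, stored in order.
std::map<std::string, std::vector<std::string>> ParseExampleJson(
    const std::string& json) {
  std::map<std::string, std::vector<std::string>> graph;
  std::istringstream ss(json);
  std::string line;
  std::string curr_subgraph;
  while (std::getline(ss, line)) {
    if (line.empty()) continue;
    if (line[0] != ' ') {
      curr_subgraph = line;  // e.g. "subgraph_0"
    } else {
      // Trim the leading indentation and keep the statement in order.
      graph[curr_subgraph].push_back(
          line.substr(line.find_first_not_of(' ')));
    }
  }
  return graph;
}
```

With the subgraph stored as an ordered list of statements, ``Run`` can later execute the nodes front to back without any topological sorting.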
 
 Review comment:
   Add a summary section to show a checklist of things todo when adding a customized runtime and compiler

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services