Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/01/27 02:43:56 UTC

[GitHub] [tvm] huajsj opened a new pull request #7345: pipeline graph patch 1

huajsj opened a new pull request #7345:
URL: https://github.com/apache/tvm/pull/7345


   Issue:
   SoC hardware platforms have multiple types of compute chipsets, such as
   GPU, FPGA, APU, RPU, etc., and there is a requirement to use these compute
   units in parallel to reach the best performance.
   
   Solution:
   In this pipeline solution, we first split the compute graph into
   a group of subgraphs, then run these subgraphs in a pipeline module
   so that the GPU/FPGA/APU/RPU can run in parallel.
   
   This patch addresses the compute graph split.
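   A minimal sketch of the intended usage, assuming this patch is applied (the network and split indices mirror the test added in this PR; whether `pipeline_graph` takes the function or the module is inferred from its docstring, so treat this as illustrative rather than definitive):
   
      import numpy as np
      import tvm
      from tvm import relay
      from tvm.relay.analysis import pipeline_graph  # added by this patch
      
      # A small four-operator network: multiply -> add -> subtract -> add.
      dshape = (3, 3)
      mconst = relay.Constant(tvm.nd.array(np.full((1,), 4).astype("float32")))
      data = relay.var("data", relay.TensorType(dshape, "float32"))
      net = relay.multiply(data, mconst)
      net = relay.add(net, mconst)
      net = relay.subtract(net, mconst)
      net = relay.add(net, mconst)
      mod = tvm.IRModule.from_expr(relay.Function([data], net))
      
      # Splitting at operator indices [1, 2] should yield three subgraph
      # IRModules that can be built for different targets and chained
      # into a pipeline.
      subgraphs = pipeline_graph(mod["main"], [1, 2])
      for i, sub in enumerate(subgraphs):
          print("=== subgraph", i, "===")
          print(sub)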
   
   Thanks for contributing to TVM!   Please refer to guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] huajsj commented on a change in pull request #7345: [WIP]pipeline graph patch 1

Posted by GitBox <gi...@apache.org>.
huajsj commented on a change in pull request #7345:
URL: https://github.com/apache/tvm/pull/7345#discussion_r613658797



##########
File path: python/tvm/relay/analysis/analysis.py
##########
@@ -448,3 +449,71 @@ def get_calibration_data(mod, data):
         calib_data[gvar] = value
 
     return calib_data
+
+
+def pipeline_graph(expr, indexs):
+    """Split Graph Into A Group Of Subgraph
+    Parameters
+    ----------
+    expr : tvm.relay.Expr
+    indexs : Array[int]
+
+    Returns
+    -------
+    ret : Array[tvm.relay.IRModule]
+    """
+
+    def run_opt_pass(expr, opt_pass):
+        """Exectue a relay pass"""
+        assert isinstance(opt_pass, tvm.transform.Pass)
+        mod = tvm.IRModule.from_expr(expr)
+        mod = tvm.relay.transform.InferType()(mod)
+        mod = opt_pass(mod)
+        entry = mod["main"]
+        return entry if isinstance(expr, tvm.relay.Function) else entry.body
+
+    def _operator_idx_inc(expr, operator_current_idx):
+        """Increase operator index"""
+        if not isinstance(expr, tvm.relay.expr.Constant):
+            operator_current_idx = operator_current_idx + 1
+
+        return operator_current_idx
+
+    def _recursion(anf, operator_indx, pipeline_mods, indexs):
+        """Do the split work"""
+        if isinstance(anf, tvm.relay.Function):
+            return tvm.relay.Function(
+                anf.params,
+                _recursion(anf.body, operator_indx, pipeline_mods, indexs),
+                anf.ret_type,
+                anf.type_params,
+                anf.attrs,
+            )
+        if isinstance(anf, tvm.relay.expr.Let):
+            value = anf.value
+            operator_indx = _operator_idx_inc(value, operator_indx)
+            if isinstance(value, tvm.relay.expr.Call):
+                if isinstance(value.op, tvm.ir.Op):
+                    if indexs and operator_indx == indexs[0]:
+                        indexs.pop(0)
+                        ann = _recursion(anf.body, operator_indx, pipeline_mods, indexs)
+                        ann = run_opt_pass(ann, transform.ToGraphNormalForm())
+                        mod = tvm.IRModule.from_expr(ann)
+                        pipeline_mods.insert(0, mod)
+                        return tvm.relay.expr.Let(anf.var, value, anf.var)

Review comment:
   For an expression a(b(c(d(e)))) with indexes [0, 1, 2, 3], the result is a, b, c, d, e; that is, the first index returns
   [<expression start>, <first index>] and the last index returns [<last index>, <expression end>]. Comment added.
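   A hedged, self-contained illustration of that convention (the five chained adds stand in for a(b(c(d(e)))); the expected subgraph count follows the description above and is not verified here, and index 0 is one of the "exotic" cases questioned later in the review):
   
      import numpy as np
      import tvm
      from tvm import relay
      from tvm.relay.analysis import pipeline_graph  # added by this patch
      
      # Five chained operators standing in for a(b(c(d(e)))).
      x = relay.var("x", relay.TensorType((3, 3), "float32"))
      one = relay.Constant(tvm.nd.array(np.ones((1,), "float32")))
      net = x
      for _ in range(5):
          net = relay.add(net, one)
      func = relay.Function([x], net)
      
      # Per the convention above, splitting at [0, 1, 2, 3] should give one
      # subgraph per operator: the first index closes
      # [<expression start>, <first index>], each later index closes the span
      # up to itself, and the remainder after the last index forms
      # [<last index>, <expression end>].
      subgraphs = pipeline_graph(func, [0, 1, 2, 3])
      print(len(subgraphs))  # expected: 5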







[GitHub] [tvm] huajsj commented on pull request #7345: [WIP]pipeline graph patch 1

Posted by GitBox <gi...@apache.org>.
huajsj commented on pull request #7345:
URL: https://github.com/apache/tvm/pull/7345#issuecomment-820683884


   @ZihengJiang @giuseros, thanks for the review. As we discussed in https://discuss.tvm.apache.org/t/compute-graph-pipeline/8957, I am working on a new patch with runtime support to make the whole feature self-contained. All review comments have been addressed and will be carried over to the new PR; closing this PR now.





[GitHub] [tvm] huajsj closed pull request #7345: [WIP]pipeline graph patch 1

Posted by GitBox <gi...@apache.org>.
huajsj closed pull request #7345:
URL: https://github.com/apache/tvm/pull/7345


   





[GitHub] [tvm] giuseros commented on a change in pull request #7345: pipeline graph patch 1

Posted by GitBox <gi...@apache.org>.
giuseros commented on a change in pull request #7345:
URL: https://github.com/apache/tvm/pull/7345#discussion_r565990717



##########
File path: python/tvm/relay/analysis/analysis.py
##########
@@ -448,3 +449,71 @@ def get_calibration_data(mod, data):
         calib_data[gvar] = value
 
     return calib_data
+
+
+def pipeline_graph(expr, indexs):
+    """Split Graph Into A Group Of Subgraph
+    Parameters
+    ----------
+    expr : tvm.relay.Expr
+    indexs : Array[int]
+
+    Returns
+    -------
+    ret : Array[tvm.relay.IRModule]
+    """
+
+    def run_opt_pass(expr, opt_pass):
+        """Exectue a relay pass"""
+        assert isinstance(opt_pass, tvm.transform.Pass)
+        mod = tvm.IRModule.from_expr(expr)
+        mod = tvm.relay.transform.InferType()(mod)
+        mod = opt_pass(mod)
+        entry = mod["main"]
+        return entry if isinstance(expr, tvm.relay.Function) else entry.body
+
+    def _operator_idx_inc(expr, operator_current_idx):
+        """Increase operator index"""
+        if not isinstance(expr, tvm.relay.expr.Constant):
+            operator_current_idx = operator_current_idx + 1
+
+        return operator_current_idx
+
+    def _recursion(anf, operator_indx, pipeline_mods, indexs):
+        """Do the split work"""
+        if isinstance(anf, tvm.relay.Function):
+            return tvm.relay.Function(
+                anf.params,
+                _recursion(anf.body, operator_indx, pipeline_mods, indexs),
+                anf.ret_type,
+                anf.type_params,
+                anf.attrs,
+            )
+        if isinstance(anf, tvm.relay.expr.Let):
+            value = anf.value
+            operator_indx = _operator_idx_inc(value, operator_indx)
+            if isinstance(value, tvm.relay.expr.Call):
+                if isinstance(value.op, tvm.ir.Op):
+                    if indexs and operator_indx == indexs[0]:
+                        indexs.pop(0)
+                        ann = _recursion(anf.body, operator_indx, pipeline_mods, indexs)
+                        ann = run_opt_pass(ann, transform.ToGraphNormalForm())
+                        mod = tvm.IRModule.from_expr(ann)
+                        pipeline_mods.insert(0, mod)
+                        return tvm.relay.expr.Let(anf.var, value, anf.var)

Review comment:
       So basically if you have a(b(c(d))) and indexes are [0,1,2,3] you get separate modules for a,b,c,d. Is this correct? If so, would you mind adding some high level comments to help follow the code?

##########
File path: tests/python/relay/test_analysis_pipeline.py
##########
@@ -0,0 +1,69 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import numpy as np
+import tvm
+import tvm.testing
+from tvm import relay
+from tvm.relay import transform
+from tvm.contrib import graph_runtime
+from tvm.relay.analysis import pipeline_graph
+
+
+def run_module(mod, ctx, dname, data):
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod, "llvm")
+
+    m = graph_runtime.create(graph, lib, ctx)
+    m.set_input(dname, data)
+    m.set_input(**params)
+    m.run()
+    output = m.get_output(0).asnumpy()
+    return output
+
+
+def get_network():
+    dshape = (3, 3)
+    mvalue = np.full((1), 4).astype("float32")
+    mmv = relay.Constant(tvm.nd.array(mvalue))
+    mv = relay.Constant(tvm.nd.array(mvalue))
+    mv2 = relay.Constant(tvm.nd.array(mvalue))
+    mv3 = relay.Constant(tvm.nd.array(mvalue))
+    data = relay.var("data", relay.TensorType(dshape, "float32"))
+    net = relay.multiply(data, mv)
+    net = relay.add(net, mv2)
+    net = relay.subtract(net, mv3)
+    net = relay.add(net, mv3)
+    func = relay.Function([data], net)
+    mod = tvm.IRModule.from_expr(func)
+    return mod, dshape
+
+
+mod, dshape = get_network()
+pl = [1, 2]

Review comment:
       Should we also test more exotic cases like [0, 3, 5]? Would this still work? (I think so, but better to add a test.)
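   A hedged sketch of such a test, reusing the `run_module` helper from the test file above; the six-operator network and the chained comparison against the unsplit module are hypothetical additions, and it assumes each subgraph takes a single input and that `pipeline_graph` returns the subgraphs in execution order:
   
      import numpy as np
      import tvm
      import tvm.testing
      from tvm import relay
      from tvm.relay.analysis import pipeline_graph
      
      def test_exotic_split_indices():
          # Six element-wise operators so that split indices [0, 3, 5] all exist.
          dshape = (3, 3)
          const = relay.Constant(tvm.nd.array(np.full((1,), 2).astype("float32")))
          data = relay.var("data", relay.TensorType(dshape, "float32"))
          net = data
          for _ in range(6):
              net = relay.add(net, const)
          mod = tvm.IRModule.from_expr(relay.Function([data], net))
      
          subgraphs = pipeline_graph(mod["main"], [0, 3, 5])
      
          # Chain the subgraphs and compare against the unsplit module.
          ctx = tvm.cpu()
          indata = np.random.uniform(size=dshape).astype("float32")
          expected = run_module(mod, ctx, "data", indata)
          out = indata
          for sub in subgraphs:
              dname = sub["main"].params[0].name_hint
              out = run_module(sub, ctx, dname, out)
          tvm.testing.assert_allclose(out, expected)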

##########
File path: python/tvm/relay/analysis/analysis.py
##########
@@ -448,3 +449,71 @@ def get_calibration_data(mod, data):
         calib_data[gvar] = value
 
     return calib_data
+
+
+def pipeline_graph(expr, indexs):
+    """Split Graph Into A Group Of Subgraph
+    Parameters
+    ----------
+    expr : tvm.relay.Expr
+    indexs : Array[int]

Review comment:
       Minor comment: what about "indices" instead of "indexs"?

##########
File path: python/tvm/relay/analysis/analysis.py
##########
@@ -448,3 +449,71 @@ def get_calibration_data(mod, data):
         calib_data[gvar] = value
 
     return calib_data
+
+
+def pipeline_graph(expr, indexs):
+    """Split Graph Into A Group Of Subgraph
+    Parameters
+    ----------
+    expr : tvm.relay.Expr
+    indexs : Array[int]
+
+    Returns
+    -------
+    ret : Array[tvm.relay.IRModule]
+    """
+
+    def run_opt_pass(expr, opt_pass):
+        """Exectue a relay pass"""
+        assert isinstance(opt_pass, tvm.transform.Pass)
+        mod = tvm.IRModule.from_expr(expr)
+        mod = tvm.relay.transform.InferType()(mod)
+        mod = opt_pass(mod)
+        entry = mod["main"]
+        return entry if isinstance(expr, tvm.relay.Function) else entry.body
+
+    def _operator_idx_inc(expr, operator_current_idx):
+        """Increase operator index"""
+        if not isinstance(expr, tvm.relay.expr.Constant):
+            operator_current_idx = operator_current_idx + 1
+
+        return operator_current_idx
+
+    def _recursion(anf, operator_indx, pipeline_mods, indexs):
+        """Do the split work"""

Review comment:
       Could we give a more detailed description of what this function is doing?
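   For example, a docstring along these lines might help (wording is only a suggestion, inferred from the code shown in the diff):
   
      def _recursion(anf, operator_indx, pipeline_mods, indexs):
          """Split the A-normal-form let chain at the requested operator indices.
      
          Walk the let chain, counting non-constant operators. When the running
          operator index matches the next entry in `indexs`, the remainder of
          the chain is recursively split, converted back to graph normal form,
          wrapped in an IRModule, and prepended to `pipeline_mods`; the current
          let is then rewritten to return its bound variable, so it becomes the
          output of the current subgraph.
          """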







[GitHub] [tvm] huajsj edited a comment on pull request #7345: [WIP]pipeline graph patch 1

Posted by GitBox <gi...@apache.org>.
huajsj edited a comment on pull request #7345:
URL: https://github.com/apache/tvm/pull/7345#issuecomment-820683884


   @ZihengJiang @giuseros, thanks for the review. As we discussed in https://discuss.tvm.apache.org/t/compute-graph-pipeline/8957, I will work on a new patch with runtime support to make the whole feature self-contained. All review comments have been addressed and will be carried over to the new PR; closing this PR now.

